Jan 22 00:42:51.048963 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 21 22:02:49 -00 2026
Jan 22 00:42:51.048991 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:42:51.049005 kernel: BIOS-provided physical RAM map:
Jan 22 00:42:51.049013 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 22 00:42:51.049020 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 22 00:42:51.049028 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jan 22 00:42:51.049037 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jan 22 00:42:51.049046 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jan 22 00:42:51.049053 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jan 22 00:42:51.049063 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 22 00:42:51.049071 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 22 00:42:51.049078 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 22 00:42:51.049086 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 22 00:42:51.049094 kernel: printk: legacy bootconsole [earlyser0] enabled
Jan 22 00:42:51.049106 kernel: NX (Execute Disable) protection: active
Jan 22 00:42:51.049114 kernel: APIC: Static calls initialized
Jan 22 00:42:51.049122 kernel: efi: EFI v2.7 by Microsoft
Jan 22 00:42:51.049131 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa3018 RNG=0x3ffd2018
Jan 22 00:42:51.049139 kernel: random: crng init done
Jan 22 00:42:51.049147 kernel: secureboot: Secure boot disabled
Jan 22 00:42:51.049156 kernel: SMBIOS 3.1.0 present.
Jan 22 00:42:51.049164 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Jan 22 00:42:51.049172 kernel: DMI: Memory slots populated: 2/2
Jan 22 00:42:51.049180 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 22 00:42:51.049190 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jan 22 00:42:51.049199 kernel: Hyper-V: Nested features: 0x3e0101
Jan 22 00:42:51.049207 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 22 00:42:51.049215 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 22 00:42:51.049223 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 22 00:42:51.049231 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 22 00:42:51.049240 kernel: tsc: Detected 2300.000 MHz processor
Jan 22 00:42:51.049248 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 22 00:42:51.049258 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 22 00:42:51.049267 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jan 22 00:42:51.049278 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 22 00:42:51.049287 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 22 00:42:51.049296 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jan 22 00:42:51.049305 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jan 22 00:42:51.049313 kernel: Using GB pages for direct mapping
Jan 22 00:42:51.049322 kernel: ACPI: Early table checksum verification disabled
Jan 22 00:42:51.049336 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 22 00:42:51.049346 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 22 00:42:51.049355 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 22 00:42:51.049364 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 22 00:42:51.049373 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 22 00:42:51.049382 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 22 00:42:51.049394 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 22 00:42:51.049403 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 22 00:42:51.049412 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 22 00:42:51.049421 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 22 00:42:51.049430 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 22 00:42:51.049439 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 22 00:42:51.049450 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Jan 22 00:42:51.049460 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 22 00:42:51.049469 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 22 00:42:51.049478 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 22 00:42:51.049487 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 22 00:42:51.049496 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 22 00:42:51.049505 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jan 22 00:42:51.049516 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 22 00:42:51.049525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 22 00:42:51.049534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jan 22 00:42:51.049544 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jan 22 00:42:51.049553 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jan 22 00:42:51.049562 kernel: Zone ranges:
Jan 22 00:42:51.049571 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 00:42:51.049582 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 00:42:51.049591 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 22 00:42:51.049600 kernel: Device empty
Jan 22 00:42:51.049609 kernel: Movable zone start for each node
Jan 22 00:42:51.049619 kernel: Early memory node ranges
Jan 22 00:42:51.049628 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 22 00:42:51.049637 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jan 22 00:42:51.049648 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jan 22 00:42:51.049657 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 22 00:42:51.049666 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 22 00:42:51.049675 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 22 00:42:51.049684 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 00:42:51.049693 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 22 00:42:51.049702 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 22 00:42:51.049713 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jan 22 00:42:51.049723 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 22 00:42:51.049732 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 22 00:42:51.049757 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 00:42:51.049767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 00:42:51.049776 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 00:42:51.049785 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 22 00:42:51.049796 kernel: TSC deadline timer available
Jan 22 00:42:51.049805 kernel: CPU topo: Max. logical packages: 1
Jan 22 00:42:51.049814 kernel: CPU topo: Max. logical dies: 1
Jan 22 00:42:51.049823 kernel: CPU topo: Max. dies per package: 1
Jan 22 00:42:51.049832 kernel: CPU topo: Max. threads per core: 2
Jan 22 00:42:51.049841 kernel: CPU topo: Num. cores per package: 1
Jan 22 00:42:51.049850 kernel: CPU topo: Num. threads per package: 2
Jan 22 00:42:51.049859 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 22 00:42:51.049870 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 22 00:42:51.049879 kernel: Booting paravirtualized kernel on Hyper-V
Jan 22 00:42:51.049889 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 00:42:51.049898 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 22 00:42:51.049907 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 22 00:42:51.049917 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 22 00:42:51.049926 kernel: pcpu-alloc: [0] 0 1
Jan 22 00:42:51.049937 kernel: Hyper-V: PV spinlocks enabled
Jan 22 00:42:51.049946 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 22 00:42:51.049957 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:42:51.049966 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 22 00:42:51.049975 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 00:42:51.049985 kernel: Fallback order for Node 0: 0
Jan 22 00:42:51.049996 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jan 22 00:42:51.050005 kernel: Policy zone: Normal
Jan 22 00:42:51.050014 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 00:42:51.050023 kernel: software IO TLB: area num 2.
Jan 22 00:42:51.050032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 22 00:42:51.050041 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 22 00:42:51.050050 kernel: ftrace: allocated 157 pages with 5 groups
Jan 22 00:42:51.050061 kernel: Dynamic Preempt: voluntary
Jan 22 00:42:51.050071 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 00:42:51.050081 kernel: rcu: RCU event tracing is enabled.
Jan 22 00:42:51.050098 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 22 00:42:51.050110 kernel: Trampoline variant of Tasks RCU enabled.
Jan 22 00:42:51.050120 kernel: Rude variant of Tasks RCU enabled.
Jan 22 00:42:51.050130 kernel: Tracing variant of Tasks RCU enabled.
Jan 22 00:42:51.050140 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 00:42:51.050150 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 22 00:42:51.050160 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:42:51.050171 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:42:51.050632 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 22 00:42:51.050644 kernel: Using NULL legacy PIC
Jan 22 00:42:51.050959 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 22 00:42:51.050971 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 00:42:51.050982 kernel: Console: colour dummy device 80x25
Jan 22 00:42:51.050992 kernel: printk: legacy console [tty1] enabled
Jan 22 00:42:51.051002 kernel: printk: legacy console [ttyS0] enabled
Jan 22 00:42:51.051012 kernel: printk: legacy bootconsole [earlyser0] disabled
Jan 22 00:42:51.051023 kernel: ACPI: Core revision 20240827
Jan 22 00:42:51.051033 kernel: Failed to register legacy timer interrupt
Jan 22 00:42:51.051046 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 00:42:51.051056 kernel: x2apic enabled
Jan 22 00:42:51.051066 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 00:42:51.051076 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 22 00:42:51.051086 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 22 00:42:51.051096 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jan 22 00:42:51.051106 kernel: Hyper-V: Using IPI hypercalls
Jan 22 00:42:51.051118 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 22 00:42:51.051128 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 22 00:42:51.051138 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 22 00:42:51.051148 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 22 00:42:51.051158 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 22 00:42:51.051168 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 22 00:42:51.051179 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jan 22 00:42:51.051191 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jan 22 00:42:51.051201 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 00:42:51.051211 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 22 00:42:51.051221 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 22 00:42:51.051231 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 00:42:51.051241 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 00:42:51.051250 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 00:42:51.051260 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 22 00:42:51.051272 kernel: RETBleed: Vulnerable
Jan 22 00:42:51.051281 kernel: Speculative Store Bypass: Vulnerable
Jan 22 00:42:51.051291 kernel: active return thunk: its_return_thunk
Jan 22 00:42:51.051300 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 22 00:42:51.051309 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 00:42:51.051319 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 00:42:51.051328 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 00:42:51.051338 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 22 00:42:51.051347 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 22 00:42:51.051357 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 22 00:42:51.051368 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jan 22 00:42:51.051378 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jan 22 00:42:51.051387 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jan 22 00:42:51.051397 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 22 00:42:51.051406 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 22 00:42:51.051415 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 22 00:42:51.051424 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 22 00:42:51.051434 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jan 22 00:42:51.051444 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jan 22 00:42:51.051453 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jan 22 00:42:51.051463 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jan 22 00:42:51.051474 kernel: Freeing SMP alternatives memory: 32K
Jan 22 00:42:51.051484 kernel: pid_max: default: 32768 minimum: 301
Jan 22 00:42:51.051493 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 22 00:42:51.051503 kernel: landlock: Up and running.
Jan 22 00:42:51.051512 kernel: SELinux: Initializing.
Jan 22 00:42:51.051522 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 00:42:51.051531 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 00:42:51.051541 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jan 22 00:42:51.051551 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jan 22 00:42:51.051561 kernel: signal: max sigframe size: 11952
Jan 22 00:42:51.051573 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 00:42:51.051583 kernel: rcu: Max phase no-delay instances is 400.
Jan 22 00:42:51.051594 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 22 00:42:51.051604 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 22 00:42:51.051614 kernel: smp: Bringing up secondary CPUs ...
Jan 22 00:42:51.051624 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 00:42:51.051634 kernel: .... node #0, CPUs: #1
Jan 22 00:42:51.051645 kernel: smp: Brought up 1 node, 2 CPUs
Jan 22 00:42:51.051655 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jan 22 00:42:51.051666 kernel: Memory: 8095536K/8383228K available (14336K kernel code, 2445K rwdata, 29896K rodata, 15436K init, 2604K bss, 281556K reserved, 0K cma-reserved)
Jan 22 00:42:51.051676 kernel: devtmpfs: initialized
Jan 22 00:42:51.051686 kernel: x86/mm: Memory block size: 128MB
Jan 22 00:42:51.051696 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 22 00:42:51.051706 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 00:42:51.051718 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 22 00:42:51.051728 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 00:42:51.051766 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 00:42:51.051776 kernel: audit: initializing netlink subsys (disabled)
Jan 22 00:42:51.051786 kernel: audit: type=2000 audit(1769042567.082:1): state=initialized audit_enabled=0 res=1
Jan 22 00:42:51.051796 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 00:42:51.051806 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 00:42:51.051818 kernel: cpuidle: using governor menu
Jan 22 00:42:51.051828 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 00:42:51.051838 kernel: dca service started, version 1.12.1
Jan 22 00:42:51.051848 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jan 22 00:42:51.051858 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jan 22 00:42:51.051868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 00:42:51.051878 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 00:42:51.051890 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 00:42:51.051900 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 00:42:51.051909 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 00:42:51.051918 kernel: ACPI: Added _OSI(Module Device)
Jan 22 00:42:51.051928 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 00:42:51.051937 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 00:42:51.051947 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 00:42:51.051959 kernel: ACPI: Interpreter enabled
Jan 22 00:42:51.051969 kernel: ACPI: PM: (supports S0 S5)
Jan 22 00:42:51.051979 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 00:42:51.051989 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 00:42:51.051999 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 22 00:42:51.052009 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 22 00:42:51.052020 kernel: iommu: Default domain type: Translated
Jan 22 00:42:51.052031 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 00:42:51.052043 kernel: efivars: Registered efivars operations
Jan 22 00:42:51.052054 kernel: PCI: Using ACPI for IRQ routing
Jan 22 00:42:51.052063 kernel: PCI: System does not support PCI
Jan 22 00:42:51.052073 kernel: vgaarb: loaded
Jan 22 00:42:51.052083 kernel: clocksource: Switched to clocksource tsc-early
Jan 22 00:42:51.052093 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 00:42:51.052103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 00:42:51.052115 kernel: pnp: PnP ACPI init
Jan 22 00:42:51.052125 kernel: pnp: PnP ACPI: found 3 devices
Jan 22 00:42:51.052135 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 00:42:51.052146 kernel: NET: Registered PF_INET protocol family
Jan 22 00:42:51.052156 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 00:42:51.052166 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 22 00:42:51.052176 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 00:42:51.052188 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 00:42:51.052198 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 22 00:42:51.052208 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 00:42:51.052218 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 00:42:51.052228 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 00:42:51.052238 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 00:42:51.052248 kernel: NET: Registered PF_XDP protocol family
Jan 22 00:42:51.052259 kernel: PCI: CLS 0 bytes, default 64
Jan 22 00:42:51.052269 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 00:42:51.052279 kernel: software IO TLB: mapped [mem 0x000000003a9ac000-0x000000003e9ac000] (64MB)
Jan 22 00:42:51.052289 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jan 22 00:42:51.052299 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jan 22 00:42:51.052309 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jan 22 00:42:51.052319 kernel: clocksource: Switched to clocksource tsc
Jan 22 00:42:51.052332 kernel: Initialise system trusted keyrings
Jan 22 00:42:51.052342 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 22 00:42:51.052352 kernel: Key type asymmetric registered
Jan 22 00:42:51.052362 kernel: Asymmetric key parser 'x509' registered
Jan 22 00:42:51.052371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 22 00:42:51.052381 kernel: io scheduler mq-deadline registered
Jan 22 00:42:51.052391 kernel: io scheduler kyber registered
Jan 22 00:42:51.052403 kernel: io scheduler bfq registered
Jan 22 00:42:51.052413 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 22 00:42:51.052423 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 00:42:51.052433 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 00:42:51.052444 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 22 00:42:51.052454 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 00:42:51.052464 kernel: i8042: PNP: No PS/2 controller found.
Jan 22 00:42:51.052637 kernel: rtc_cmos 00:02: registered as rtc0
Jan 22 00:42:51.052775 kernel: rtc_cmos 00:02: setting system clock to 2026-01-22T00:42:49 UTC (1769042569)
Jan 22 00:42:51.052885 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 22 00:42:51.052898 kernel: intel_pstate: Intel P-state driver initializing
Jan 22 00:42:51.052908 kernel: efifb: probing for efifb
Jan 22 00:42:51.052918 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 22 00:42:51.052931 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 22 00:42:51.052941 kernel: efifb: scrolling: redraw
Jan 22 00:42:51.052951 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 22 00:42:51.052961 kernel: Console: switching to colour frame buffer device 128x48
Jan 22 00:42:51.052971 kernel: fb0: EFI VGA frame buffer device
Jan 22 00:42:51.052981 kernel: pstore: Using crash dump compression: deflate
Jan 22 00:42:51.052991 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 22 00:42:51.053004 kernel: NET: Registered PF_INET6 protocol family
Jan 22 00:42:51.053014 kernel: Segment Routing with IPv6
Jan 22 00:42:51.053024 kernel: In-situ OAM (IOAM) with IPv6
Jan 22 00:42:51.053034 kernel: NET: Registered PF_PACKET protocol family
Jan 22 00:42:51.053044 kernel: Key type dns_resolver registered
Jan 22 00:42:51.053054 kernel: IPI shorthand broadcast: enabled
Jan 22 00:42:51.053064 kernel: sched_clock: Marking stable (1967168671, 105394407)->(2397129241, -324566163)
Jan 22 00:42:51.053075 kernel: registered taskstats version 1
Jan 22 00:42:51.053087 kernel: Loading compiled-in X.509 certificates
Jan 22 00:42:51.053097 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3c3e07c08e874e2a4bf964a0051bfd3618f8b847'
Jan 22 00:42:51.053107 kernel: Demotion targets for Node 0: null
Jan 22 00:42:51.053117 kernel: Key type .fscrypt registered
Jan 22 00:42:51.053127 kernel: Key type fscrypt-provisioning registered
Jan 22 00:42:51.053137 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 00:42:51.053147 kernel: ima: Allocated hash algorithm: sha1
Jan 22 00:42:51.053159 kernel: ima: No architecture policies found
Jan 22 00:42:51.053170 kernel: clk: Disabling unused clocks
Jan 22 00:42:51.053180 kernel: Freeing unused kernel image (initmem) memory: 15436K
Jan 22 00:42:51.053190 kernel: Write protecting the kernel read-only data: 45056k
Jan 22 00:42:51.053200 kernel: Freeing unused kernel image (rodata/data gap) memory: 824K
Jan 22 00:42:51.053210 kernel: Run /init as init process
Jan 22 00:42:51.053220 kernel: with arguments:
Jan 22 00:42:51.053232 kernel: /init
Jan 22 00:42:51.053242 kernel: with environment:
Jan 22 00:42:51.053252 kernel: HOME=/
Jan 22 00:42:51.053262 kernel: TERM=linux
Jan 22 00:42:51.053272 kernel: hv_vmbus: Vmbus version:5.3
Jan 22 00:42:51.053282 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 22 00:42:51.053292 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 22 00:42:51.053304 kernel: PTP clock support registered
Jan 22 00:42:51.053315 kernel: hv_utils: Registering HyperV Utility Driver
Jan 22 00:42:51.053325 kernel: hv_vmbus: registering driver hv_utils
Jan 22 00:42:51.053335 kernel: hv_utils: Shutdown IC version 3.2
Jan 22 00:42:51.053345 kernel: hv_utils: Heartbeat IC version 3.0
Jan 22 00:42:51.053355 kernel: hv_utils: TimeSync IC version 4.0
Jan 22 00:42:51.053365 kernel: hv_vmbus: registering driver hv_pci
Jan 22 00:42:51.053375 kernel: SCSI subsystem initialized
Jan 22 00:42:51.053524 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Jan 22 00:42:51.053644 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Jan 22 00:42:51.053801 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Jan 22 00:42:51.053922 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 22 00:42:51.054144 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Jan 22 00:42:51.054300 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Jan 22 00:42:51.054422 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Jan 22 00:42:51.054545 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Jan 22 00:42:51.054557 kernel: hv_vmbus: registering driver hv_storvsc
Jan 22 00:42:51.054689 kernel: scsi host0: storvsc_host_t
Jan 22 00:42:51.055238 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 22 00:42:51.055252 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 22 00:42:51.055263 kernel: hv_vmbus: registering driver hid_hyperv
Jan 22 00:42:51.055273 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 22 00:42:51.055392 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 22 00:42:51.055405 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 22 00:42:51.055418 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 22 00:42:51.055526 kernel: nvme nvme0: pci function c05b:00:00.0
Jan 22 00:42:51.055654 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Jan 22 00:42:51.055759 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 22 00:42:51.055773 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 22 00:42:51.055898 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 22 00:42:51.055910 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 00:42:51.056032 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 22 00:42:51.056044 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 00:42:51.056053 kernel: device-mapper: uevent: version 1.0.3
Jan 22 00:42:51.056063 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 22 00:42:51.056073 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Jan 22 00:42:51.056096 kernel: raid6: avx512x4 gen() 43492 MB/s
Jan 22 00:42:51.056108 kernel: raid6: avx512x2 gen() 43009 MB/s
Jan 22 00:42:51.056118 kernel: raid6: avx512x1 gen() 26277 MB/s
Jan 22 00:42:51.056128 kernel: raid6: avx2x4 gen() 34751 MB/s
Jan 22 00:42:51.056137 kernel: raid6: avx2x2 gen() 37457 MB/s
Jan 22 00:42:51.056147 kernel: raid6: avx2x1 gen() 29562 MB/s
Jan 22 00:42:51.056157 kernel: raid6: using algorithm avx512x4 gen() 43492 MB/s
Jan 22 00:42:51.056170 kernel: raid6: .... xor() 7587 MB/s, rmw enabled
Jan 22 00:42:51.056180 kernel: raid6: using avx512x2 recovery algorithm
Jan 22 00:42:51.056190 kernel: xor: automatically using best checksumming function avx
Jan 22 00:42:51.056200 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 22 00:42:51.056210 kernel: BTRFS: device fsid 79986906-7858-40a3-90f5-bda7e594a44c devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (846)
Jan 22 00:42:51.056220 kernel: BTRFS info (device dm-0): first mount of filesystem 79986906-7858-40a3-90f5-bda7e594a44c
Jan 22 00:42:51.056230 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 22 00:42:51.056244 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 22 00:42:51.056255 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 22 00:42:51.056265 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 22 00:42:51.056276 kernel: loop: module loaded
Jan 22 00:42:51.056286 kernel: loop0: detected capacity change from 0 to 100160
Jan 22 00:42:51.056296 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 22 00:42:51.056307 systemd[1]: Successfully made /usr/ read-only.
Jan 22 00:42:51.056323 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 22 00:42:51.056335 systemd[1]: Detected virtualization microsoft.
Jan 22 00:42:51.056345 systemd[1]: Detected architecture x86-64.
Jan 22 00:42:51.056362 systemd[1]: Running in initrd.
Jan 22 00:42:51.056372 systemd[1]: No hostname configured, using default hostname.
Jan 22 00:42:51.056383 systemd[1]: Hostname set to .
Jan 22 00:42:51.056397 systemd[1]: Initializing machine ID from random generator.
Jan 22 00:42:51.056407 systemd[1]: Queued start job for default target initrd.target.
Jan 22 00:42:51.056417 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 22 00:42:51.056428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 22 00:42:51.056438 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 22 00:42:51.056450 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 22 00:42:51.056463 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 22 00:42:51.056475 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 22 00:42:51.056486 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 22 00:42:51.056499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 22 00:42:51.056510 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 22 00:42:51.056521 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 22 00:42:51.056531 systemd[1]: Reached target paths.target - Path Units.
Jan 22 00:42:51.056542 systemd[1]: Reached target slices.target - Slice Units.
Jan 22 00:42:51.056553 systemd[1]: Reached target swap.target - Swaps.
Jan 22 00:42:51.056563 systemd[1]: Reached target timers.target - Timer Units.
Jan 22 00:42:51.056576 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 22 00:42:51.056586 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 22 00:42:51.056597 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 22 00:42:51.056608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 22 00:42:51.056618 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 22 00:42:51.056629 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 22 00:42:51.056639 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 22 00:42:51.056651 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 22 00:42:51.056661 systemd[1]: Reached target sockets.target - Socket Units.
Jan 22 00:42:51.056671 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 22 00:42:51.056682 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 22 00:42:51.056692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 22 00:42:51.056702 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 22 00:42:51.056713 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 22 00:42:51.056726 systemd[1]: Starting systemd-fsck-usr.service...
Jan 22 00:42:51.056745 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 22 00:42:51.056756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 22 00:42:51.056767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:42:51.056779 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 22 00:42:51.056790 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 22 00:42:51.056800 systemd[1]: Finished systemd-fsck-usr.service.
Jan 22 00:42:51.056810 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 22 00:42:51.056838 systemd-journald[983]: Collecting audit messages is enabled.
Jan 22 00:42:51.056864 systemd-journald[983]: Journal started
Jan 22 00:42:51.056888 systemd-journald[983]: Runtime Journal (/run/log/journal/016f981af38048b99cb307d60cce9412) is 8M, max 158.5M, 150.5M free.
Jan 22 00:42:51.059835 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 22 00:42:51.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.066802 kernel: audit: type=1130 audit(1769042571.060:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.069420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 22 00:42:51.078962 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 00:42:51.083548 systemd-modules-load[984]: Inserted module 'br_netfilter'
Jan 22 00:42:51.085047 kernel: Bridge firewalling registered
Jan 22 00:42:51.086143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 22 00:42:51.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.092391 kernel: audit: type=1130 audit(1769042571.084:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.091771 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 22 00:42:51.095673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 22 00:42:51.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.102754 kernel: audit: type=1130 audit(1769042571.095:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.104903 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 22 00:42:51.110205 systemd-tmpfiles[995]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 22 00:42:51.123982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 22 00:42:51.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.128785 kernel: audit: type=1130 audit(1769042571.122:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.131056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 22 00:42:51.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.136976 kernel: audit: type=1130 audit(1769042571.130:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.137340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 22 00:42:51.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.144757 kernel: audit: type=1130 audit(1769042571.138:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.144905 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 22 00:42:51.141000 audit: BPF prog-id=6 op=LOAD
Jan 22 00:42:51.150548 kernel: audit: type=1334 audit(1769042571.141:8): prog-id=6 op=LOAD
Jan 22 00:42:51.152430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:42:51.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.162107 kernel: audit: type=1130 audit(1769042571.156:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.161630 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 22 00:42:51.184525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 22 00:42:51.196279 kernel: audit: type=1130 audit(1769042571.188:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.193339 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 22 00:42:51.209312 systemd-resolved[1008]: Positive Trust Anchors:
Jan 22 00:42:51.209324 systemd-resolved[1008]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 22 00:42:51.209328 systemd-resolved[1008]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 22 00:42:51.220877 dracut-cmdline[1023]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2c7ce323fe43e7b63a59c25601f0c418cba5a1d902eeaa4bfcebc579e79e52d2
Jan 22 00:42:51.209365 systemd-resolved[1008]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 22 00:42:51.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.234167 systemd-resolved[1008]: Defaulting to hostname 'linux'.
Jan 22 00:42:51.235006 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 22 00:42:51.246083 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 22 00:42:51.319756 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 00:42:51.342754 kernel: iscsi: registered transport (tcp)
Jan 22 00:42:51.368953 kernel: iscsi: registered transport (qla4xxx)
Jan 22 00:42:51.368995 kernel: QLogic iSCSI HBA Driver
Jan 22 00:42:51.394645 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 22 00:42:51.406517 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 22 00:42:51.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.411726 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 22 00:42:51.444617 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 22 00:42:51.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.446877 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 22 00:42:51.448849 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 22 00:42:51.488459 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 22 00:42:51.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.490000 audit: BPF prog-id=7 op=LOAD
Jan 22 00:42:51.490000 audit: BPF prog-id=8 op=LOAD
Jan 22 00:42:51.493950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 22 00:42:51.524874 systemd-udevd[1264]: Using default interface naming scheme 'v257'.
Jan 22 00:42:51.535547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 22 00:42:51.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.545855 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 22 00:42:51.555809 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 22 00:42:51.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.558000 audit: BPF prog-id=9 op=LOAD
Jan 22 00:42:51.563861 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 22 00:42:51.575939 dracut-pre-trigger[1351]: rd.md=0: removing MD RAID activation
Jan 22 00:42:51.604001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 22 00:42:51.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.609885 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 22 00:42:51.620521 systemd-networkd[1363]: lo: Link UP
Jan 22 00:42:51.620528 systemd-networkd[1363]: lo: Gained carrier
Jan 22 00:42:51.623860 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 22 00:42:51.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.631716 systemd[1]: Reached target network.target - Network.
Jan 22 00:42:51.664425 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 22 00:42:51.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.671802 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 22 00:42:51.728765 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#103 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 22 00:42:51.756753 kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 00:42:51.759287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 00:42:51.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.759400 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:42:51.763708 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:42:51.770823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:42:51.787762 kernel: hv_vmbus: registering driver hv_netvsc
Jan 22 00:42:51.790949 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 00:42:51.794011 kernel: nvme nvme0: using unchecked data buffer
Jan 22 00:42:51.794423 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:42:51.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.804476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 22 00:42:51.813361 kernel: AES CTR mode by8 optimization enabled
Jan 22 00:42:51.822794 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2c4093 (unnamed net_device) (uninitialized): VF slot 1 added
Jan 22 00:42:51.878844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 22 00:42:51.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.890929 systemd-networkd[1363]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 22 00:42:51.892185 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 22 00:42:51.898611 systemd-networkd[1363]: eth0: Link UP
Jan 22 00:42:51.900206 systemd-networkd[1363]: eth0: Gained carrier
Jan 22 00:42:51.900221 systemd-networkd[1363]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 22 00:42:51.901671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 22 00:42:51.919811 systemd-networkd[1363]: eth0: DHCPv4 address 10.200.8.28/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 22 00:42:51.929643 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 22 00:42:51.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:51.939336 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jan 22 00:42:51.949581 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jan 22 00:42:51.958875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 22 00:42:51.959143 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 22 00:42:51.959285 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 22 00:42:51.968474 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 22 00:42:51.974952 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 22 00:42:51.977917 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 22 00:42:52.005605 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 22 00:42:52.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:52.851665 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jan 22 00:42:52.851934 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jan 22 00:42:52.854822 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jan 22 00:42:52.856497 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 22 00:42:52.861867 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jan 22 00:42:52.865750 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jan 22 00:42:52.870802 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jan 22 00:42:52.870870 kernel: pci 7870:00:00.0: enabling Extended Tags
Jan 22 00:42:52.887890 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jan 22 00:42:52.888112 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jan 22 00:42:52.894765 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jan 22 00:42:52.898988 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jan 22 00:42:52.910750 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jan 22 00:42:52.913315 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2c4093 eth0: VF registering: eth1
Jan 22 00:42:52.913504 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jan 22 00:42:52.917645 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jan 22 00:42:52.918081 systemd-networkd[1363]: eth1: Interface name change detected, renamed to enP30832s1.
Jan 22 00:42:53.022757 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jan 22 00:42:53.025749 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 22 00:42:53.028179 systemd-networkd[1363]: enP30832s1: Link UP
Jan 22 00:42:53.030813 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2c4093 eth0: Data path switched to VF: enP30832s1
Jan 22 00:42:53.028516 systemd-networkd[1363]: enP30832s1: Gained carrier
Jan 22 00:42:53.086583 disk-uuid[1559]: Warning: The kernel is still using the old partition table.
Jan 22 00:42:53.086583 disk-uuid[1559]: The new table will be used at the next reboot or after you
Jan 22 00:42:53.086583 disk-uuid[1559]: run partprobe(8) or kpartx(8)
Jan 22 00:42:53.086583 disk-uuid[1559]: The operation has completed successfully.
Jan 22 00:42:53.096571 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 22 00:42:53.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.096680 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 22 00:42:53.109594 kernel: kauditd_printk_skb: 18 callbacks suppressed
Jan 22 00:42:53.110804 kernel: audit: type=1130 audit(1769042573.098:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.110821 kernel: audit: type=1131 audit(1769042573.098:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.101697 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 22 00:42:53.136771 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1590)
Jan 22 00:42:53.136810 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254
Jan 22 00:42:53.139011 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 22 00:42:53.147931 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 22 00:42:53.147969 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 22 00:42:53.148833 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 22 00:42:53.154757 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254
Jan 22 00:42:53.155534 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 22 00:42:53.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.161897 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 22 00:42:53.165606 kernel: audit: type=1130 audit(1769042573.155:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.433220 ignition[1609]: Ignition 2.22.0
Jan 22 00:42:53.433233 ignition[1609]: Stage: fetch-offline
Jan 22 00:42:53.433356 ignition[1609]: no configs at "/usr/lib/ignition/base.d"
Jan 22 00:42:53.444825 kernel: audit: type=1130 audit(1769042573.438:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:42:53.436484 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 22 00:42:53.433365 ignition[1609]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 22 00:42:53.441648 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 22 00:42:53.433467 ignition[1609]: parsed url from cmdline: "" Jan 22 00:42:53.433470 ignition[1609]: no config URL provided Jan 22 00:42:53.433476 ignition[1609]: reading system config file "/usr/lib/ignition/user.ign" Jan 22 00:42:53.433483 ignition[1609]: no config at "/usr/lib/ignition/user.ign" Jan 22 00:42:53.433488 ignition[1609]: failed to fetch config: resource requires networking Jan 22 00:42:53.434921 ignition[1609]: Ignition finished successfully Jan 22 00:42:53.475807 ignition[1616]: Ignition 2.22.0 Jan 22 00:42:53.475818 ignition[1616]: Stage: fetch Jan 22 00:42:53.476050 ignition[1616]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:42:53.476058 ignition[1616]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 22 00:42:53.476147 ignition[1616]: parsed url from cmdline: "" Jan 22 00:42:53.476150 ignition[1616]: no config URL provided Jan 22 00:42:53.476155 ignition[1616]: reading system config file "/usr/lib/ignition/user.ign" Jan 22 00:42:53.476160 ignition[1616]: no config at "/usr/lib/ignition/user.ign" Jan 22 00:42:53.476184 ignition[1616]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 22 00:42:53.711633 ignition[1616]: GET result: OK Jan 22 00:42:53.711716 ignition[1616]: config has been read from IMDS userdata Jan 22 00:42:53.711773 ignition[1616]: parsing config with SHA512: 43db80249156f376772beebd3e99969780876f3e7813ac096d6fd2035b3a8fb2ce20c670e57227819be8cf74590b6564044aa1b9799ca73216887c74edaa9496 Jan 22 00:42:53.716108 unknown[1616]: fetched base config from "system" Jan 22 00:42:53.716689 ignition[1616]: fetch: fetch complete Jan 22 00:42:53.716123 unknown[1616]: fetched base config from "system" Jan 22 00:42:53.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:53.716694 ignition[1616]: fetch: fetch passed Jan 22 00:42:53.732251 kernel: audit: type=1130 audit(1769042573.722:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:53.716131 unknown[1616]: fetched user config from "azure" Jan 22 00:42:53.716763 ignition[1616]: Ignition finished successfully Jan 22 00:42:53.719238 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 22 00:42:53.726038 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 22 00:42:53.757096 ignition[1622]: Ignition 2.22.0 Jan 22 00:42:53.757105 ignition[1622]: Stage: kargs Jan 22 00:42:53.757344 ignition[1622]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:42:53.757352 ignition[1622]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 22 00:42:53.771912 kernel: audit: type=1130 audit(1769042573.762:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:53.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:53.761014 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 22 00:42:53.758439 ignition[1622]: kargs: kargs passed Jan 22 00:42:53.765641 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
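Editor's note: the fetch stage above pulled the config from the Azure IMDS userData endpoint and logged the SHA512 of what it parsed. A minimal sketch of the same request, runnable only from inside the VM; Azure IMDS requires the "Metadata: true" header and returns userData base64-encoded:

    import base64
    import hashlib
    import urllib.request

    # URL copied from the Ignition log line above.
    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        userdata = base64.b64decode(resp.read())

    # Ignition logs the SHA512 of the parsed config; the same digest can be
    # recomputed over the decoded payload.
    print(hashlib.sha512(userdata).hexdigest())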
Jan 22 00:42:53.758483 ignition[1622]: Ignition finished successfully Jan 22 00:42:53.797236 ignition[1628]: Ignition 2.22.0 Jan 22 00:42:53.797245 ignition[1628]: Stage: disks Jan 22 00:42:53.797459 ignition[1628]: no configs at "/usr/lib/ignition/base.d" Jan 22 00:42:53.800109 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 22 00:42:53.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:53.797467 ignition[1628]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 22 00:42:53.810575 kernel: audit: type=1130 audit(1769042573.800:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:53.803179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 22 00:42:53.798530 ignition[1628]: disks: disks passed Jan 22 00:42:53.811088 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 22 00:42:53.798567 ignition[1628]: Ignition finished successfully Jan 22 00:42:53.812745 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 00:42:53.815786 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:42:53.819779 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:42:53.822424 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 22 00:42:53.829208 systemd-networkd[1363]: eth0: Gained IPv6LL Jan 22 00:42:53.872844 systemd-fsck[1637]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 22 00:42:53.876664 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 22 00:42:53.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:53.889826 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 22 00:42:53.893333 kernel: audit: type=1130 audit(1769042573.884:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:54.043857 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 2fa3c08b-a48e-45e5-aeb3-7441bca9cf30 r/w with ordered data mode. Quota mode: none. Jan 22 00:42:54.044463 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 22 00:42:54.045544 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 22 00:42:54.053097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 22 00:42:54.057924 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 22 00:42:54.064895 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 22 00:42:54.069382 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 22 00:42:54.069421 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:42:54.077756 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1646) Jan 22 00:42:54.078079 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
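Editor's note: the systemd-fsck line above, "ROOT: clean, 15/6361680 files, 408771/6359552 blocks", is e2fsck's summary of inodes in use / total inodes and blocks in use / total blocks. A sketch of re-running the same check read-only after boot (only meaningful while the filesystem is not mounted read-write; -n never writes and answers "no" to every prompt):

    import subprocess

    subprocess.run(
        ["e2fsck", "-fn", "/dev/disk/by-label/ROOT"],
        check=False,   # e2fsck uses non-zero exit codes to report findings
    )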
Jan 22 00:42:54.082975 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:42:54.083068 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:42:54.086715 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 22 00:42:54.095344 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 22 00:42:54.095392 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 22 00:42:54.097196 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 22 00:42:54.098218 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:42:54.222456 coreos-metadata[1648]: Jan 22 00:42:54.222 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 22 00:42:54.232970 coreos-metadata[1648]: Jan 22 00:42:54.232 INFO Fetch successful Jan 22 00:42:54.235822 coreos-metadata[1648]: Jan 22 00:42:54.233 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 22 00:42:54.248171 coreos-metadata[1648]: Jan 22 00:42:54.248 INFO Fetch successful Jan 22 00:42:54.252825 coreos-metadata[1648]: Jan 22 00:42:54.249 INFO wrote hostname ci-4515.1.0-n-d879fbfda5 to /sysroot/etc/hostname Jan 22 00:42:54.250773 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 22 00:42:54.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:54.263753 kernel: audit: type=1130 audit(1769042574.256:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:54.295008 initrd-setup-root[1677]: cut: /sysroot/etc/passwd: No such file or directory Jan 22 00:42:54.307573 initrd-setup-root[1684]: cut: /sysroot/etc/group: No such file or directory Jan 22 00:42:54.315697 initrd-setup-root[1691]: cut: /sysroot/etc/shadow: No such file or directory Jan 22 00:42:54.325979 initrd-setup-root[1698]: cut: /sysroot/etc/gshadow: No such file or directory Jan 22 00:42:54.540147 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 22 00:42:54.548866 kernel: audit: type=1130 audit(1769042574.541:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:54.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:54.546662 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 22 00:42:54.569197 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 22 00:42:54.582530 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 22 00:42:54.585881 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:42:54.610026 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 22 00:42:54.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:42:54.614297 ignition[1765]: INFO : Ignition 2.22.0 Jan 22 00:42:54.614297 ignition[1765]: INFO : Stage: mount Jan 22 00:42:54.618642 ignition[1765]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:42:54.618642 ignition[1765]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 22 00:42:54.618642 ignition[1765]: INFO : mount: mount passed Jan 22 00:42:54.618642 ignition[1765]: INFO : Ignition finished successfully Jan 22 00:42:54.616952 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 22 00:42:54.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:42:54.627516 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 22 00:42:54.639295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 22 00:42:54.655781 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1777) Jan 22 00:42:54.655816 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 04d4f92e-e2f4-4570-a15f-a84e10359254 Jan 22 00:42:54.657777 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 22 00:42:54.662925 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 22 00:42:54.662969 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 22 00:42:54.664121 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 22 00:42:54.665894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 22 00:42:54.694603 ignition[1793]: INFO : Ignition 2.22.0 Jan 22 00:42:54.694603 ignition[1793]: INFO : Stage: files Jan 22 00:42:54.698806 ignition[1793]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:42:54.698806 ignition[1793]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 22 00:42:54.698806 ignition[1793]: DEBUG : files: compiled without relabeling support, skipping Jan 22 00:42:54.707836 ignition[1793]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 22 00:42:54.707836 ignition[1793]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 22 00:42:54.719030 ignition[1793]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 22 00:42:54.721831 ignition[1793]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 22 00:42:54.721831 ignition[1793]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 22 00:42:54.719334 unknown[1793]: wrote ssh authorized keys file for user: core Jan 22 00:42:54.730215 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 22 00:42:54.732699 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 22 00:43:24.745240 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET error: Get "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz": dial tcp 13.107.213.52:443: i/o timeout Jan 22 00:43:24.945780 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #2 Jan 22 00:43:27.063556 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 22 
00:43:27.148289 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 22 00:43:27.151088 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 22 00:43:27.153516 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 22 00:43:27.153516 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:43:27.162819 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:43:27.203775 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 22 00:43:27.665551 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 22 00:43:28.251425 ignition[1793]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 22 00:43:28.251425 ignition[1793]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 22 00:43:28.261027 ignition[1793]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:43:28.268295 ignition[1793]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 22 00:43:28.268295 ignition[1793]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 22 00:43:28.268295 ignition[1793]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 22 00:43:28.276831 ignition[1793]: INFO : files: 
op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 22 00:43:28.276831 ignition[1793]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:43:28.276831 ignition[1793]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 22 00:43:28.276831 ignition[1793]: INFO : files: files passed Jan 22 00:43:28.276831 ignition[1793]: INFO : Ignition finished successfully Jan 22 00:43:28.315523 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 22 00:43:28.315553 kernel: audit: type=1130 audit(1769042608.278:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.315569 kernel: audit: type=1130 audit(1769042608.310:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.274778 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 22 00:43:28.282884 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 22 00:43:28.292305 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 22 00:43:28.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.308104 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 22 00:43:28.329125 kernel: audit: type=1131 audit(1769042608.310:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.308192 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 22 00:43:28.332061 initrd-setup-root-after-ignition[1826]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:43:28.332061 initrd-setup-root-after-ignition[1826]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:43:28.337812 initrd-setup-root-after-ignition[1830]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 22 00:43:28.345778 kernel: audit: type=1130 audit(1769042608.339:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.337283 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
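Editor's note: the files stage logged above wrote the helm tarball, the Kubernetes sysext image and link, the core user's SSH keys, and enabled prepare-helm.service. The config that drove it came from Azure userdata and is not visible in the log; the following is only a sketch of the kind of Ignition v3 document that would produce those operations. Paths and URLs are copied from the log lines, while the SSH key and unit body are placeholders:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                {"name": "core",
                 "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=placeholder\n"},
            ]
        },
    }

    print(json.dumps(config, indent=2))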
Jan 22 00:43:28.342553 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 22 00:43:28.349902 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 22 00:43:28.382020 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 22 00:43:28.382104 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 22 00:43:28.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.393374 kernel: audit: type=1130 audit(1769042608.386:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.393461 kernel: audit: type=1131 audit(1769042608.386:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.388219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 22 00:43:28.397442 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 22 00:43:28.400460 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 22 00:43:28.401903 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 22 00:43:28.428717 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 22 00:43:28.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.437845 kernel: audit: type=1130 audit(1769042608.428:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.435694 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 22 00:43:28.451241 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 22 00:43:28.451471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:43:28.453217 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:43:28.456587 systemd[1]: Stopped target timers.target - Timer Units. Jan 22 00:43:28.463078 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 22 00:43:28.471803 kernel: audit: type=1131 audit(1769042608.465:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.463207 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
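Editor's note: the kernel "audit: type=1130" and "type=1131" lines interleaved above are the numeric forms of the SERVICE_START and SERVICE_STOP records printed next to them. After boot, the same records can be pulled back out with the audit userspace tools, assuming they are installed and the records were persisted:

    import subprocess

    # type=1130 == SERVICE_START, type=1131 == SERVICE_STOP; -i interprets
    # numeric fields into readable names.
    subprocess.run(
        ["ausearch", "-m", "SERVICE_START,SERVICE_STOP", "-i"],
        check=False,
    )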
Jan 22 00:43:28.473215 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 22 00:43:28.477285 systemd[1]: Stopped target basic.target - Basic System. Jan 22 00:43:28.479500 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 22 00:43:28.483390 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 22 00:43:28.486573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 22 00:43:28.489712 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 22 00:43:28.494490 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 22 00:43:28.496338 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 22 00:43:28.500094 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 22 00:43:28.504268 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 22 00:43:28.505982 systemd[1]: Stopped target swap.target - Swaps. Jan 22 00:43:28.509860 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 22 00:43:28.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.509998 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 22 00:43:28.528526 kernel: audit: type=1131 audit(1769042608.510:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.528551 kernel: audit: type=1131 audit(1769042608.517:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.516545 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:43:28.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.516867 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 00:43:28.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.517087 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 22 00:43:28.518448 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:43:28.518716 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 22 00:43:28.518839 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 22 00:43:28.519362 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 22 00:43:28.519453 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 22 00:43:28.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.519604 systemd[1]: ignition-files.service: Deactivated successfully. Jan 22 00:43:28.519683 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 22 00:43:28.529342 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 22 00:43:28.529493 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 22 00:43:28.535015 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 22 00:43:28.541644 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 22 00:43:28.548892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 22 00:43:28.549054 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:43:28.585491 ignition[1850]: INFO : Ignition 2.22.0 Jan 22 00:43:28.585491 ignition[1850]: INFO : Stage: umount Jan 22 00:43:28.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.559490 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 22 00:43:28.591054 ignition[1850]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 22 00:43:28.591054 ignition[1850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 22 00:43:28.591054 ignition[1850]: INFO : umount: umount passed Jan 22 00:43:28.591054 ignition[1850]: INFO : Ignition finished successfully Jan 22 00:43:28.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:28.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.559620 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:43:28.562328 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 22 00:43:28.562449 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 22 00:43:28.580753 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 22 00:43:28.580838 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 22 00:43:28.591444 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 22 00:43:28.593882 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 22 00:43:28.593989 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 22 00:43:28.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.594442 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 22 00:43:28.594518 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 22 00:43:28.594888 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 22 00:43:28.594926 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 22 00:43:28.594982 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 22 00:43:28.595010 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 22 00:43:28.595221 systemd[1]: Stopped target network.target - Network. Jan 22 00:43:28.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.595248 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 22 00:43:28.595282 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 22 00:43:28.595605 systemd[1]: Stopped target paths.target - Path Units. Jan 22 00:43:28.595626 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 22 00:43:28.596131 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 00:43:28.606813 systemd[1]: Stopped target slices.target - Slice Units. Jan 22 00:43:28.611829 systemd[1]: Stopped target sockets.target - Socket Units. Jan 22 00:43:28.615576 systemd[1]: iscsid.socket: Deactivated successfully. Jan 22 00:43:28.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.658000 audit: BPF prog-id=6 op=UNLOAD Jan 22 00:43:28.615613 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 22 00:43:28.659000 audit: BPF prog-id=9 op=UNLOAD Jan 22 00:43:28.618809 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 22 00:43:28.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.618845 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 22 00:43:28.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.622808 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 22 00:43:28.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.622833 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:43:28.625307 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 22 00:43:28.625352 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 22 00:43:28.627799 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 22 00:43:28.627839 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 22 00:43:28.630018 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 22 00:43:28.633869 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 22 00:43:28.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.638365 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 22 00:43:28.638456 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 22 00:43:28.644028 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 22 00:43:28.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.644131 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 22 00:43:28.649701 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 22 00:43:28.649790 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 22 00:43:28.660943 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 22 00:43:28.663835 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 22 00:43:28.663873 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 22 00:43:28.666239 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 22 00:43:28.666291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 22 00:43:28.667530 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 22 00:43:28.667677 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 22 00:43:28.667723 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
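Editor's note: the "BPF prog-id=N op=UNLOAD" audit lines above record BPF programs being detached as systemd-networkd and related services stop (systemd loads such programs for things like per-unit IP accounting and filtering). A sketch of listing whatever BPF programs are currently loaded, assuming bpftool is available:

    import json
    import subprocess

    out = subprocess.run(["bpftool", "-j", "prog", "show"],
                         check=True, capture_output=True, text=True)
    for prog in json.loads(out.stdout):
        print(prog.get("id"), prog.get("type"), prog.get("name"))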
Jan 22 00:43:28.673851 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 22 00:43:28.673906 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:43:28.677811 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 22 00:43:28.677856 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 22 00:43:28.680452 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:43:28.693213 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 22 00:43:28.693327 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:43:28.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.700547 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 22 00:43:28.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.700587 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 22 00:43:28.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.703847 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 22 00:43:28.703880 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:43:28.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.706986 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 22 00:43:28.707029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 22 00:43:28.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.736794 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 22 00:43:28.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.736853 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 22 00:43:28.739152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 22 00:43:28.739191 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 22 00:43:28.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:28.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.743851 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 22 00:43:28.745880 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 22 00:43:28.788694 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2c4093 eth0: Data path switched from VF: enP30832s1 Jan 22 00:43:28.789464 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 22 00:43:28.745945 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 22 00:43:28.750332 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 22 00:43:28.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:28.751543 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 00:43:28.753632 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 22 00:43:28.753675 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 22 00:43:28.760851 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 22 00:43:28.760899 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 00:43:28.763395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 22 00:43:28.763478 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:43:28.768970 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 22 00:43:28.769063 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 22 00:43:28.792528 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 22 00:43:28.792611 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 22 00:43:28.795294 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 22 00:43:28.798863 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 22 00:43:28.822198 systemd[1]: Switching root. Jan 22 00:43:28.867508 systemd-journald[983]: Journal stopped Jan 22 00:43:31.047644 systemd-journald[983]: Received SIGTERM from PID 1 (systemd). Jan 22 00:43:31.047671 kernel: SELinux: policy capability network_peer_controls=1 Jan 22 00:43:31.047685 kernel: SELinux: policy capability open_perms=1 Jan 22 00:43:31.047695 kernel: SELinux: policy capability extended_socket_class=1 Jan 22 00:43:31.047703 kernel: SELinux: policy capability always_check_network=0 Jan 22 00:43:31.047712 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 22 00:43:31.047722 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 22 00:43:31.047745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 22 00:43:31.047755 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 22 00:43:31.047764 kernel: SELinux: policy capability userspace_initial_context=0 Jan 22 00:43:31.047774 systemd[1]: Successfully loaded SELinux policy in 71.921ms. Jan 22 00:43:31.047784 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.295ms. 
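Editor's note: the "SELinux: policy capability ..." lines above describe flags baked into the loaded policy. They can be re-read at runtime from selinuxfs, assuming it is mounted at the usual /sys/fs/selinux location:

    import pathlib

    capdir = pathlib.Path("/sys/fs/selinux/policy_capabilities")
    for cap in sorted(capdir.iterdir()):
        # Each file contains 0 or 1, matching the =0 / =1 values logged above.
        print(cap.name, cap.read_text().strip())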
Jan 22 00:43:31.047795 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 22 00:43:31.047807 systemd[1]: Detected virtualization microsoft. Jan 22 00:43:31.047818 systemd[1]: Detected architecture x86-64. Jan 22 00:43:31.047828 systemd[1]: Detected first boot. Jan 22 00:43:31.047839 systemd[1]: Hostname set to . Jan 22 00:43:31.047851 systemd[1]: Initializing machine ID from random generator. Jan 22 00:43:31.047861 zram_generator::config[1894]: No configuration found. Jan 22 00:43:31.047872 kernel: Guest personality initialized and is inactive Jan 22 00:43:31.047882 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 22 00:43:31.047891 kernel: Initialized host personality Jan 22 00:43:31.047900 kernel: NET: Registered PF_VSOCK protocol family Jan 22 00:43:31.047909 systemd[1]: Populated /etc with preset unit settings. Jan 22 00:43:31.047921 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 22 00:43:31.047931 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 22 00:43:31.047941 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 22 00:43:31.047955 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 22 00:43:31.047965 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 22 00:43:31.047976 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 22 00:43:31.047988 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 22 00:43:31.047998 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 22 00:43:31.048008 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 22 00:43:31.048018 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 22 00:43:31.048028 systemd[1]: Created slice user.slice - User and Session Slice. Jan 22 00:43:31.048038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 22 00:43:31.048049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 22 00:43:31.048060 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 22 00:43:31.048070 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 22 00:43:31.048081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 22 00:43:31.048094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 22 00:43:31.048104 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 22 00:43:31.048116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 22 00:43:31.048126 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 22 00:43:31.048136 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 22 00:43:31.048147 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
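Editor's note: "Detected virtualization microsoft" above matches what systemd-detect-virt reports on Hyper-V/Azure guests, and the machine ID initialized from the random generator ends up in /etc/machine-id. A quick way to confirm both from a running system:

    import pathlib
    import subprocess

    virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    print("virtualization:", virt.stdout.strip())   # expected here: "microsoft"
    print("machine-id:", pathlib.Path("/etc/machine-id").read_text().strip())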
Jan 22 00:43:31.048157 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 22 00:43:31.048167 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 22 00:43:31.048177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 22 00:43:31.048189 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 22 00:43:31.048199 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 22 00:43:31.048209 systemd[1]: Reached target slices.target - Slice Units. Jan 22 00:43:31.048220 systemd[1]: Reached target swap.target - Swaps. Jan 22 00:43:31.048230 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 22 00:43:31.048242 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 22 00:43:31.048255 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 22 00:43:31.048265 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 22 00:43:31.048276 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 22 00:43:31.048286 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 22 00:43:31.048298 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 22 00:43:31.048308 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 22 00:43:31.048318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 22 00:43:31.048329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 22 00:43:31.048339 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 22 00:43:31.048350 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 22 00:43:31.048360 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 22 00:43:31.048372 systemd[1]: Mounting media.mount - External Media Directory... Jan 22 00:43:31.048383 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:31.048393 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 22 00:43:31.048403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 22 00:43:31.048414 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 22 00:43:31.048424 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 22 00:43:31.048436 systemd[1]: Reached target machines.target - Containers. Jan 22 00:43:31.048447 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 22 00:43:31.048457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:43:31.048468 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 22 00:43:31.048479 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 22 00:43:31.048489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:43:31.048499 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
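Editor's note: the modprobe@configfs, modprobe@dm_mod and modprobe@drm jobs being started here (and continuing below) are instances of systemd's modprobe@.service template, which loads the kernel module named after the "@". A small sketch of inspecting the template and instantiating it for another module; the module name "loop" is only an example:

    import subprocess

    subprocess.run(["systemctl", "cat", "modprobe@.service"], check=False)
    subprocess.run(["systemctl", "start", "modprobe@loop.service"], check=False)
    subprocess.run(["lsmod"], check=False)   # the named module should now appear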
Jan 22 00:43:31.048511 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 00:43:31.048522 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 22 00:43:31.048532 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:43:31.048542 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 22 00:43:31.048553 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 22 00:43:31.048563 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 22 00:43:31.048574 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 22 00:43:31.048585 systemd[1]: Stopped systemd-fsck-usr.service. Jan 22 00:43:31.048596 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:43:31.048607 kernel: fuse: init (API version 7.41) Jan 22 00:43:31.048617 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 22 00:43:31.048627 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 22 00:43:31.048637 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 22 00:43:31.048649 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 22 00:43:31.048660 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 22 00:43:31.048670 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 22 00:43:31.048681 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:31.048692 kernel: ACPI: bus type drm_connector registered Jan 22 00:43:31.048702 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 22 00:43:31.048714 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 22 00:43:31.048725 systemd[1]: Mounted media.mount - External Media Directory. Jan 22 00:43:31.052771 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 22 00:43:31.052795 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 22 00:43:31.052806 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 22 00:43:31.052817 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 22 00:43:31.052847 systemd-journald[1988]: Collecting audit messages is enabled. Jan 22 00:43:31.052875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 22 00:43:31.052886 systemd-journald[1988]: Journal started Jan 22 00:43:31.052912 systemd-journald[1988]: Runtime Journal (/run/log/journal/ad36bb443cce4b50b9e9a06332dddddc) is 8M, max 158.5M, 150.5M free. Jan 22 00:43:30.752000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 22 00:43:30.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:30.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:30.930000 audit: BPF prog-id=14 op=UNLOAD Jan 22 00:43:30.930000 audit: BPF prog-id=13 op=UNLOAD Jan 22 00:43:30.930000 audit: BPF prog-id=15 op=LOAD Jan 22 00:43:30.930000 audit: BPF prog-id=16 op=LOAD Jan 22 00:43:30.930000 audit: BPF prog-id=17 op=LOAD Jan 22 00:43:31.042000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 22 00:43:31.042000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffce70928e0 a2=4000 a3=0 items=0 ppid=1 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:31.042000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 22 00:43:31.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:30.627044 systemd[1]: Queued start job for default target multi-user.target. Jan 22 00:43:30.632278 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 22 00:43:30.632614 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 22 00:43:31.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.064965 systemd[1]: Started systemd-journald.service - Journal Service. Jan 22 00:43:31.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.064333 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 22 00:43:31.064498 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 22 00:43:31.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.066853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:43:31.067153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 00:43:31.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:31.069389 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 00:43:31.069550 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 22 00:43:31.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.073062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:43:31.073221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 00:43:31.075697 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 22 00:43:31.075856 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 22 00:43:31.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.078724 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 00:43:31.078893 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:43:31.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.081430 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 22 00:43:31.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.086213 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 22 00:43:31.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.090046 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 22 00:43:31.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.093046 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 22 00:43:31.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.100867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 22 00:43:31.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.105192 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 22 00:43:31.107286 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 22 00:43:31.108782 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 22 00:43:31.108864 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 22 00:43:31.112468 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 22 00:43:31.114196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:43:31.114268 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:43:31.115102 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 22 00:43:31.117839 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 22 00:43:31.120846 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:43:31.122081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 22 00:43:31.125872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 22 00:43:31.131685 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 22 00:43:31.136332 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 22 00:43:31.139970 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 22 00:43:31.149245 systemd-journald[1988]: Time spent on flushing to /var/log/journal/ad36bb443cce4b50b9e9a06332dddddc is 37.898ms for 1128 entries. Jan 22 00:43:31.149245 systemd-journald[1988]: System Journal (/var/log/journal/ad36bb443cce4b50b9e9a06332dddddc) is 8M, max 2.2G, 2.2G free. Jan 22 00:43:31.281006 systemd-journald[1988]: Received client request to flush runtime journal. Jan 22 00:43:31.281074 kernel: loop1: detected capacity change from 0 to 27736 Jan 22 00:43:31.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:31.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.155876 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 22 00:43:31.158380 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 22 00:43:31.162234 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 22 00:43:31.183220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 22 00:43:31.210239 systemd-tmpfiles[2036]: ACLs are not supported, ignoring. Jan 22 00:43:31.210252 systemd-tmpfiles[2036]: ACLs are not supported, ignoring. Jan 22 00:43:31.213372 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 22 00:43:31.218879 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 22 00:43:31.286004 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 22 00:43:31.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.330351 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 22 00:43:31.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.352523 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 22 00:43:31.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.357000 audit: BPF prog-id=18 op=LOAD Jan 22 00:43:31.357000 audit: BPF prog-id=19 op=LOAD Jan 22 00:43:31.357000 audit: BPF prog-id=20 op=LOAD Jan 22 00:43:31.361898 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 22 00:43:31.364000 audit: BPF prog-id=21 op=LOAD Jan 22 00:43:31.368774 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 22 00:43:31.372829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 22 00:43:31.396000 audit: BPF prog-id=22 op=LOAD Jan 22 00:43:31.396000 audit: BPF prog-id=23 op=LOAD Jan 22 00:43:31.396000 audit: BPF prog-id=24 op=LOAD Jan 22 00:43:31.398635 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 22 00:43:31.400454 systemd-tmpfiles[2055]: ACLs are not supported, ignoring. Jan 22 00:43:31.400712 systemd-tmpfiles[2055]: ACLs are not supported, ignoring. 
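The runtime journal under /run/log/journal and the persistent system journal under /var/log/journal reported above are sized by journald.conf(5). A minimal sketch with illustrative values (not read from this host) would be:

    [Journal]
    Storage=persistent
    RuntimeMaxUse=160M
    SystemMaxUse=2G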
Jan 22 00:43:31.401000 audit: BPF prog-id=25 op=LOAD Jan 22 00:43:31.401000 audit: BPF prog-id=26 op=LOAD Jan 22 00:43:31.401000 audit: BPF prog-id=27 op=LOAD Jan 22 00:43:31.404220 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 22 00:43:31.405483 kernel: loop2: detected capacity change from 0 to 119256 Jan 22 00:43:31.406291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 22 00:43:31.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.472459 systemd-nsresourced[2057]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 22 00:43:31.475299 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 22 00:43:31.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.483907 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 22 00:43:31.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.524128 kernel: loop3: detected capacity change from 0 to 111544 Jan 22 00:43:31.583501 systemd-oomd[2052]: No swap; memory pressure usage will be degraded Jan 22 00:43:31.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.584363 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 22 00:43:31.603366 systemd-resolved[2054]: Positive Trust Anchors: Jan 22 00:43:31.603617 systemd-resolved[2054]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 22 00:43:31.603660 systemd-resolved[2054]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 22 00:43:31.603726 systemd-resolved[2054]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 22 00:43:31.633805 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 22 00:43:31.637843 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 22 00:43:31.641518 systemd-resolved[2054]: Using system hostname 'ci-4515.1.0-n-d879fbfda5'. Jan 22 00:43:31.643841 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 22 00:43:31.650029 kernel: loop4: detected capacity change from 0 to 224512 Jan 22 00:43:31.650601 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
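The positive trust anchors listed above are the root-zone DS records systemd-resolved uses for DNSSEC validation. Additional anchors could be dropped into /etc/dnssec-trust-anchors.d/ as *.positive files in the same presentation format; for example, restating the first anchor already shown in the log:

    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d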
Jan 22 00:43:31.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.652948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 22 00:43:31.659205 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 22 00:43:31.661766 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 22 00:43:31.721756 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 22 00:43:31.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:31.721000 audit: BPF prog-id=8 op=UNLOAD Jan 22 00:43:31.721000 audit: BPF prog-id=7 op=UNLOAD Jan 22 00:43:31.722000 audit: BPF prog-id=28 op=LOAD Jan 22 00:43:31.722000 audit: BPF prog-id=29 op=LOAD Jan 22 00:43:31.724504 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 22 00:43:31.731760 kernel: loop5: detected capacity change from 0 to 27736 Jan 22 00:43:31.750795 kernel: loop6: detected capacity change from 0 to 119256 Jan 22 00:43:31.751953 systemd-udevd[2084]: Using default interface naming scheme 'v257'. Jan 22 00:43:31.765772 kernel: loop7: detected capacity change from 0 to 111544 Jan 22 00:43:31.782769 kernel: loop1: detected capacity change from 0 to 224512 Jan 22 00:43:31.802313 (sd-merge)[2085]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 22 00:43:31.805062 (sd-merge)[2085]: Merged extensions into '/usr'. Jan 22 00:43:31.809338 systemd[1]: Reload requested from client PID 2035 ('systemd-sysext') (unit systemd-sysext.service)... Jan 22 00:43:31.809353 systemd[1]: Reloading... Jan 22 00:43:31.914760 zram_generator::config[2135]: No configuration found. Jan 22 00:43:31.941786 kernel: mousedev: PS/2 mouse device common for all mice Jan 22 00:43:31.968759 kernel: hv_vmbus: registering driver hv_balloon Jan 22 00:43:31.974041 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 22 00:43:31.985759 kernel: hv_vmbus: registering driver hyperv_fb Jan 22 00:43:31.989771 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 22 00:43:31.992753 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 22 00:43:31.995364 kernel: Console: switching to colour dummy device 80x25 Jan 22 00:43:32.001143 kernel: Console: switching to colour frame buffer device 128x48 Jan 22 00:43:32.118768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#90 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 22 00:43:32.321693 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 22 00:43:32.322261 systemd[1]: Reloading finished in 512 ms. Jan 22 00:43:32.337358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 22 00:43:32.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.340158 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
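The sd-merge step above overlays the listed .raw system extensions onto /usr via systemd-sysext. The merged state can be inspected or redone by hand with the standard subcommands, for example:

    systemd-sysext status     # list hierarchies and the extension images merged into them
    systemd-sysext refresh    # re-merge after adding or removing extension images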
Jan 22 00:43:32.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.371528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 22 00:43:32.391751 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 22 00:43:32.408626 systemd[1]: Starting ensure-sysext.service... Jan 22 00:43:32.414830 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 22 00:43:32.421000 audit: BPF prog-id=30 op=LOAD Jan 22 00:43:32.425035 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 22 00:43:32.429007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 22 00:43:32.432684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 22 00:43:32.433000 audit: BPF prog-id=31 op=LOAD Jan 22 00:43:32.435000 audit: BPF prog-id=25 op=UNLOAD Jan 22 00:43:32.435000 audit: BPF prog-id=32 op=LOAD Jan 22 00:43:32.435000 audit: BPF prog-id=33 op=LOAD Jan 22 00:43:32.435000 audit: BPF prog-id=26 op=UNLOAD Jan 22 00:43:32.435000 audit: BPF prog-id=27 op=UNLOAD Jan 22 00:43:32.439000 audit: BPF prog-id=34 op=LOAD Jan 22 00:43:32.439000 audit: BPF prog-id=18 op=UNLOAD Jan 22 00:43:32.439000 audit: BPF prog-id=35 op=LOAD Jan 22 00:43:32.439000 audit: BPF prog-id=36 op=LOAD Jan 22 00:43:32.439000 audit: BPF prog-id=19 op=UNLOAD Jan 22 00:43:32.439000 audit: BPF prog-id=20 op=UNLOAD Jan 22 00:43:32.440000 audit: BPF prog-id=37 op=LOAD Jan 22 00:43:32.440000 audit: BPF prog-id=15 op=UNLOAD Jan 22 00:43:32.440000 audit: BPF prog-id=38 op=LOAD Jan 22 00:43:32.440000 audit: BPF prog-id=39 op=LOAD Jan 22 00:43:32.441000 audit: BPF prog-id=16 op=UNLOAD Jan 22 00:43:32.441000 audit: BPF prog-id=17 op=UNLOAD Jan 22 00:43:32.441000 audit: BPF prog-id=40 op=LOAD Jan 22 00:43:32.441000 audit: BPF prog-id=22 op=UNLOAD Jan 22 00:43:32.441000 audit: BPF prog-id=41 op=LOAD Jan 22 00:43:32.441000 audit: BPF prog-id=42 op=LOAD Jan 22 00:43:32.441000 audit: BPF prog-id=23 op=UNLOAD Jan 22 00:43:32.441000 audit: BPF prog-id=24 op=UNLOAD Jan 22 00:43:32.442000 audit: BPF prog-id=43 op=LOAD Jan 22 00:43:32.442000 audit: BPF prog-id=44 op=LOAD Jan 22 00:43:32.442000 audit: BPF prog-id=28 op=UNLOAD Jan 22 00:43:32.442000 audit: BPF prog-id=29 op=UNLOAD Jan 22 00:43:32.442000 audit: BPF prog-id=45 op=LOAD Jan 22 00:43:32.442000 audit: BPF prog-id=21 op=UNLOAD Jan 22 00:43:32.452274 systemd-tmpfiles[2243]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 22 00:43:32.452525 systemd-tmpfiles[2243]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 22 00:43:32.452822 systemd-tmpfiles[2243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 22 00:43:32.453389 systemd[1]: Reload requested from client PID 2240 ('systemctl') (unit ensure-sysext.service)... Jan 22 00:43:32.453406 systemd[1]: Reloading... Jan 22 00:43:32.453965 systemd-tmpfiles[2243]: ACLs are not supported, ignoring. Jan 22 00:43:32.454089 systemd-tmpfiles[2243]: ACLs are not supported, ignoring. Jan 22 00:43:32.466674 systemd-tmpfiles[2243]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 22 00:43:32.467025 systemd-tmpfiles[2243]: Skipping /boot Jan 22 00:43:32.474003 systemd-tmpfiles[2243]: Detected autofs mount point /boot during canonicalization of boot. Jan 22 00:43:32.474086 systemd-tmpfiles[2243]: Skipping /boot Jan 22 00:43:32.535826 zram_generator::config[2283]: No configuration found. Jan 22 00:43:32.541242 systemd-networkd[2242]: lo: Link UP Jan 22 00:43:32.541477 systemd-networkd[2242]: lo: Gained carrier Jan 22 00:43:32.543188 systemd-networkd[2242]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:43:32.543509 systemd-networkd[2242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 22 00:43:32.544767 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 22 00:43:32.550780 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 22 00:43:32.554767 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2c4093 eth0: Data path switched to VF: enP30832s1 Jan 22 00:43:32.554012 systemd-networkd[2242]: enP30832s1: Link UP Jan 22 00:43:32.554170 systemd-networkd[2242]: eth0: Link UP Jan 22 00:43:32.554173 systemd-networkd[2242]: eth0: Gained carrier Jan 22 00:43:32.554190 systemd-networkd[2242]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:43:32.558417 systemd-networkd[2242]: enP30832s1: Gained carrier Jan 22 00:43:32.565805 systemd-networkd[2242]: eth0: DHCPv4 address 10.200.8.28/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 22 00:43:32.737652 systemd[1]: Reloading finished in 281 ms. Jan 22 00:43:32.758816 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 22 00:43:32.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:32.761000 audit: BPF prog-id=46 op=LOAD Jan 22 00:43:32.761000 audit: BPF prog-id=40 op=UNLOAD Jan 22 00:43:32.761000 audit: BPF prog-id=47 op=LOAD Jan 22 00:43:32.761000 audit: BPF prog-id=48 op=LOAD Jan 22 00:43:32.761000 audit: BPF prog-id=41 op=UNLOAD Jan 22 00:43:32.761000 audit: BPF prog-id=42 op=UNLOAD Jan 22 00:43:32.761000 audit: BPF prog-id=49 op=LOAD Jan 22 00:43:32.761000 audit: BPF prog-id=31 op=UNLOAD Jan 22 00:43:32.761000 audit: BPF prog-id=50 op=LOAD Jan 22 00:43:32.761000 audit: BPF prog-id=51 op=LOAD Jan 22 00:43:32.761000 audit: BPF prog-id=32 op=UNLOAD Jan 22 00:43:32.761000 audit: BPF prog-id=33 op=UNLOAD Jan 22 00:43:32.762000 audit: BPF prog-id=52 op=LOAD Jan 22 00:43:32.762000 audit: BPF prog-id=45 op=UNLOAD Jan 22 00:43:32.763000 audit: BPF prog-id=53 op=LOAD Jan 22 00:43:32.763000 audit: BPF prog-id=30 op=UNLOAD Jan 22 00:43:32.764000 audit: BPF prog-id=54 op=LOAD Jan 22 00:43:32.764000 audit: BPF prog-id=37 op=UNLOAD Jan 22 00:43:32.764000 audit: BPF prog-id=55 op=LOAD Jan 22 00:43:32.764000 audit: BPF prog-id=56 op=LOAD Jan 22 00:43:32.764000 audit: BPF prog-id=38 op=UNLOAD Jan 22 00:43:32.764000 audit: BPF prog-id=39 op=UNLOAD Jan 22 00:43:32.764000 audit: BPF prog-id=57 op=LOAD Jan 22 00:43:32.764000 audit: BPF prog-id=58 op=LOAD Jan 22 00:43:32.764000 audit: BPF prog-id=43 op=UNLOAD Jan 22 00:43:32.764000 audit: BPF prog-id=44 op=UNLOAD Jan 22 00:43:32.766000 audit: BPF prog-id=59 op=LOAD Jan 22 00:43:32.770000 audit: BPF prog-id=34 op=UNLOAD Jan 22 00:43:32.770000 audit: BPF prog-id=60 op=LOAD Jan 22 00:43:32.770000 audit: BPF prog-id=61 op=LOAD Jan 22 00:43:32.770000 audit: BPF prog-id=35 op=UNLOAD Jan 22 00:43:32.770000 audit: BPF prog-id=36 op=UNLOAD Jan 22 00:43:32.773876 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 22 00:43:32.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.777376 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 22 00:43:32.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.783013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 22 00:43:32.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.791650 systemd[1]: Reached target network.target - Network. Jan 22 00:43:32.793890 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 00:43:32.808019 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 22 00:43:32.812684 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 22 00:43:32.820617 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 22 00:43:32.824285 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
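The DHCPv4 lease logged above was obtained because eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network unit. That file's exact contents are not part of this log; a minimal catch-all DHCP .network sketch of the same shape would be:

    [Match]
    Name=*

    [Network]
    DHCP=yes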
Jan 22 00:43:32.827999 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 22 00:43:32.832008 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 22 00:43:32.838817 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:32.839019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:43:32.841986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:43:32.851000 audit[2357]: SYSTEM_BOOT pid=2357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.853367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 00:43:32.862724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:43:32.866914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:43:32.867090 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:43:32.867193 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:43:32.867364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:32.875065 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 00:43:32.875267 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:43:32.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.883862 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 22 00:43:32.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.889525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:43:32.889717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 00:43:32.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:32.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.894251 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 22 00:43:32.899106 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:43:32.899274 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 00:43:32.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.905436 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:32.905641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:43:32.909548 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 22 00:43:32.911458 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:43:32.911655 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:43:32.911860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:43:32.911963 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:43:32.912053 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:32.913688 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 22 00:43:32.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.920689 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 22 00:43:32.920901 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 22 00:43:32.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:43:32.925829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:32.926098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 22 00:43:32.927488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 22 00:43:32.932597 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 22 00:43:32.941129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 22 00:43:32.943177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 22 00:43:32.943354 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 22 00:43:32.943395 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 22 00:43:32.943452 systemd[1]: Reached target time-set.target - System Time Set. Jan 22 00:43:32.945385 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 22 00:43:32.948872 systemd[1]: Finished ensure-sysext.service. Jan 22 00:43:32.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:43:32.955920 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 22 00:43:32.955000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 22 00:43:32.955000 audit[2390]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe343564c0 a2=420 a3=0 items=0 ppid=2347 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:43:32.955000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:43:32.960408 augenrules[2390]: No rules Jan 22 00:43:32.956953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 22 00:43:32.960442 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:43:32.960797 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 00:43:32.962870 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 22 00:43:32.963058 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 22 00:43:32.964607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 22 00:43:32.964837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 22 00:43:32.970159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 22 00:43:32.970220 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
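The PROCTITLE hex recorded above decodes to "/sbin/auditctl -R /etc/audit/audit.rules", i.e. augenrules loading the compiled rule file, which is empty here ("No rules"). If rules were configured they would be ordinary auditctl syntax in /etc/audit/rules.d/, for example (illustrative only, nothing of the sort is present on this host):

    -w /etc/passwd -p wa -k identity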
Jan 22 00:43:33.109399 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 22 00:43:33.113167 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 22 00:43:34.250177 ldconfig[2349]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 22 00:43:34.260565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 22 00:43:34.263147 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 22 00:43:34.275988 systemd-networkd[2242]: eth0: Gained IPv6LL Jan 22 00:43:34.279080 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 22 00:43:34.283111 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 22 00:43:34.285825 systemd[1]: Reached target network-online.target - Network is Online. Jan 22 00:43:34.287461 systemd[1]: Reached target sysinit.target - System Initialization. Jan 22 00:43:34.289021 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 22 00:43:34.291819 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 22 00:43:34.293376 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 22 00:43:34.294724 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 22 00:43:34.297846 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 22 00:43:34.300793 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 22 00:43:34.303833 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 22 00:43:34.306808 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 22 00:43:34.309780 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 22 00:43:34.309812 systemd[1]: Reached target paths.target - Path Units. Jan 22 00:43:34.310847 systemd[1]: Reached target timers.target - Timer Units. Jan 22 00:43:34.316066 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 22 00:43:34.318394 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 22 00:43:34.321313 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 22 00:43:34.325931 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 22 00:43:34.328811 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 22 00:43:34.350204 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 22 00:43:34.351963 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 22 00:43:34.355399 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 22 00:43:34.357564 systemd[1]: Reached target sockets.target - Socket Units. Jan 22 00:43:34.358923 systemd[1]: Reached target basic.target - Basic System. Jan 22 00:43:34.360229 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 22 00:43:34.360252 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 22 00:43:34.365436 systemd[1]: Starting chronyd.service - NTP client/server... Jan 22 00:43:34.369835 systemd[1]: Starting containerd.service - containerd container runtime... Jan 22 00:43:34.383866 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 22 00:43:34.386800 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 22 00:43:34.390913 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 22 00:43:34.401073 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 22 00:43:34.404399 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 22 00:43:34.406071 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 22 00:43:34.409923 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 22 00:43:34.414132 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 22 00:43:34.416161 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 22 00:43:34.418438 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 22 00:43:34.421856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:43:34.427284 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 22 00:43:34.431556 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 22 00:43:34.436507 KVP[2417]: KVP starting; pid is:2417 Jan 22 00:43:34.437914 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 22 00:43:34.443613 google_oslogin_nss_cache[2416]: oslogin_cache_refresh[2416]: Refreshing passwd entry cache Jan 22 00:43:34.443923 oslogin_cache_refresh[2416]: Refreshing passwd entry cache Jan 22 00:43:34.444876 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 22 00:43:34.449527 jq[2411]: false Jan 22 00:43:34.452012 KVP[2417]: KVP LIC Version: 3.1 Jan 22 00:43:34.452321 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 22 00:43:34.453114 kernel: hv_utils: KVP IC version 4.0 Jan 22 00:43:34.461108 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 22 00:43:34.462893 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 22 00:43:34.463334 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 22 00:43:34.464625 google_oslogin_nss_cache[2416]: oslogin_cache_refresh[2416]: Failure getting users, quitting Jan 22 00:43:34.464625 google_oslogin_nss_cache[2416]: oslogin_cache_refresh[2416]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 22 00:43:34.464625 google_oslogin_nss_cache[2416]: oslogin_cache_refresh[2416]: Refreshing group entry cache Jan 22 00:43:34.464242 oslogin_cache_refresh[2416]: Failure getting users, quitting Jan 22 00:43:34.464260 oslogin_cache_refresh[2416]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 22 00:43:34.464309 oslogin_cache_refresh[2416]: Refreshing group entry cache Jan 22 00:43:34.464989 systemd[1]: Starting update-engine.service - Update Engine... Jan 22 00:43:34.473585 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 22 00:43:34.479443 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 22 00:43:34.482086 extend-filesystems[2415]: Found /dev/nvme0n1p6 Jan 22 00:43:34.483132 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 22 00:43:34.486915 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 22 00:43:34.506555 extend-filesystems[2415]: Found /dev/nvme0n1p9 Jan 22 00:43:34.509944 google_oslogin_nss_cache[2416]: oslogin_cache_refresh[2416]: Failure getting groups, quitting Jan 22 00:43:34.509944 google_oslogin_nss_cache[2416]: oslogin_cache_refresh[2416]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 00:43:34.509340 oslogin_cache_refresh[2416]: Failure getting groups, quitting Jan 22 00:43:34.509351 oslogin_cache_refresh[2416]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 22 00:43:34.514214 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 22 00:43:34.522906 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 22 00:43:34.526703 extend-filesystems[2415]: Checking size of /dev/nvme0n1p9 Jan 22 00:43:34.525423 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 22 00:43:34.525645 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 22 00:43:34.541758 jq[2429]: true Jan 22 00:43:34.545130 chronyd[2406]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 22 00:43:34.557985 update_engine[2428]: I20260122 00:43:34.550628 2428 main.cc:92] Flatcar Update Engine starting Jan 22 00:43:34.548975 systemd[1]: motdgen.service: Deactivated successfully. Jan 22 00:43:34.549978 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 22 00:43:34.567756 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 22 00:43:34.575686 chronyd[2406]: Timezone right/UTC failed leap second check, ignoring Jan 22 00:43:34.575859 chronyd[2406]: Loaded seccomp filter (level 2) Jan 22 00:43:34.576964 systemd[1]: Started chronyd.service - NTP client/server. Jan 22 00:43:34.584450 dbus-daemon[2409]: [system] SELinux support is enabled Jan 22 00:43:34.584618 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 22 00:43:34.590073 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 22 00:43:34.590101 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
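The chronyd line "Timezone right/UTC failed leap second check, ignoring" above is emitted when chrony is asked to derive leap seconds from the right/UTC timezone database but the check fails; the corresponding chrony.conf directive (shown only as a sketch, the shipped configuration is not part of this log) is:

    leapsectz right/UTC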
Jan 22 00:43:34.592203 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 22 00:43:34.592219 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 22 00:43:34.594617 tar[2435]: linux-amd64/LICENSE Jan 22 00:43:34.594617 tar[2435]: linux-amd64/helm Jan 22 00:43:34.595512 jq[2465]: true Jan 22 00:43:34.615340 extend-filesystems[2415]: Resized partition /dev/nvme0n1p9 Jan 22 00:43:34.617030 update_engine[2428]: I20260122 00:43:34.615180 2428 update_check_scheduler.cc:74] Next update check in 5m17s Jan 22 00:43:34.611230 systemd[1]: Started update-engine.service - Update Engine. Jan 22 00:43:34.629266 extend-filesystems[2478]: resize2fs 1.47.3 (8-Jul-2025) Jan 22 00:43:34.637307 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 22 00:43:34.627430 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 22 00:43:34.646762 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 22 00:43:34.657560 coreos-metadata[2408]: Jan 22 00:43:34.657 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 22 00:43:34.681375 coreos-metadata[2408]: Jan 22 00:43:34.662 INFO Fetch successful Jan 22 00:43:34.681375 coreos-metadata[2408]: Jan 22 00:43:34.662 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 22 00:43:34.681375 coreos-metadata[2408]: Jan 22 00:43:34.667 INFO Fetch successful Jan 22 00:43:34.681375 coreos-metadata[2408]: Jan 22 00:43:34.667 INFO Fetching http://168.63.129.16/machine/e8f4d84e-b23c-435d-b706-91df5a39ae99/4a5ec680%2D86fe%2D41b5%2Da2dd%2D9267ecfac60e.%5Fci%2D4515.1.0%2Dn%2Dd879fbfda5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 22 00:43:34.681375 coreos-metadata[2408]: Jan 22 00:43:34.670 INFO Fetch successful Jan 22 00:43:34.681375 coreos-metadata[2408]: Jan 22 00:43:34.670 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 22 00:43:34.659125 systemd-logind[2425]: New seat seat0. Jan 22 00:43:34.681131 systemd-logind[2425]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 22 00:43:34.681344 systemd[1]: Started systemd-logind.service - User Login Management. Jan 22 00:43:34.686831 coreos-metadata[2408]: Jan 22 00:43:34.685 INFO Fetch successful Jan 22 00:43:34.699056 extend-filesystems[2478]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 22 00:43:34.699056 extend-filesystems[2478]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 22 00:43:34.699056 extend-filesystems[2478]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 22 00:43:34.698111 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 22 00:43:34.716226 extend-filesystems[2415]: Resized filesystem in /dev/nvme0n1p9 Jan 22 00:43:34.698352 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 22 00:43:34.759606 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 22 00:43:34.761807 bash[2498]: Updated "/home/core/.ssh/authorized_keys" Jan 22 00:43:34.765097 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 22 00:43:34.770891 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
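The extend-filesystems output above is an on-line grow of the mounted root filesystem to fill its enlarged partition. The equivalent manual step for the same device, normally unnecessary because the service already performs it, would be:

    resize2fs /dev/nvme0n1p9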
Jan 22 00:43:34.771291 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 22 00:43:34.779873 sshd_keygen[2450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 22 00:43:34.903345 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 22 00:43:34.912297 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 22 00:43:34.917238 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 22 00:43:34.975307 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 22 00:43:34.986287 systemd[1]: issuegen.service: Deactivated successfully. Jan 22 00:43:34.986529 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 22 00:43:34.999115 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 22 00:43:35.005980 locksmithd[2479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 22 00:43:35.024569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 22 00:43:35.032298 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 22 00:43:35.035681 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 22 00:43:35.038066 systemd[1]: Reached target getty.target - Login Prompts. Jan 22 00:43:35.185245 containerd[2460]: time="2026-01-22T00:43:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 22 00:43:35.185245 containerd[2460]: time="2026-01-22T00:43:35.181723935Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 22 00:43:35.211339 containerd[2460]: time="2026-01-22T00:43:35.211297767Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.454µs" Jan 22 00:43:35.211519 containerd[2460]: time="2026-01-22T00:43:35.211448058Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 22 00:43:35.211519 containerd[2460]: time="2026-01-22T00:43:35.211488036Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 22 00:43:35.211519 containerd[2460]: time="2026-01-22T00:43:35.211501531Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 22 00:43:35.211769 containerd[2460]: time="2026-01-22T00:43:35.211713815Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 22 00:43:35.211769 containerd[2460]: time="2026-01-22T00:43:35.211728271Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 22 00:43:35.211904 containerd[2460]: time="2026-01-22T00:43:35.211890271Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 22 00:43:35.211957 containerd[2460]: time="2026-01-22T00:43:35.211948398Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 22 00:43:35.212279 containerd[2460]: time="2026-01-22T00:43:35.212260538Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 22 00:43:35.212449 containerd[2460]: time="2026-01-22T00:43:35.212354004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 22 00:43:35.212449 containerd[2460]: time="2026-01-22T00:43:35.212380868Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 22 00:43:35.212449 containerd[2460]: time="2026-01-22T00:43:35.212389731Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 22 00:43:35.212728 containerd[2460]: time="2026-01-22T00:43:35.212714223Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 22 00:43:35.212794 containerd[2460]: time="2026-01-22T00:43:35.212785711Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 22 00:43:35.213014 containerd[2460]: time="2026-01-22T00:43:35.212939176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 22 00:43:35.213185 containerd[2460]: time="2026-01-22T00:43:35.213170712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 22 00:43:35.213258 containerd[2460]: time="2026-01-22T00:43:35.213245516Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 22 00:43:35.213302 containerd[2460]: time="2026-01-22T00:43:35.213288140Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 22 00:43:35.213410 containerd[2460]: time="2026-01-22T00:43:35.213361178Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 22 00:43:35.213719 containerd[2460]: time="2026-01-22T00:43:35.213706692Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 22 00:43:35.213836 containerd[2460]: time="2026-01-22T00:43:35.213827371Z" level=info msg="metadata content store policy set" policy=shared Jan 22 00:43:35.231649 containerd[2460]: time="2026-01-22T00:43:35.231603190Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 22 00:43:35.231753 containerd[2460]: time="2026-01-22T00:43:35.231704580Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232825112Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232856350Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232874925Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232892206Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232904664Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232916338Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232929373Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232941698Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232952592Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232963067Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232973399Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.232985497Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 22 00:43:35.234753 containerd[2460]: time="2026-01-22T00:43:35.233087801Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233105151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233119324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233130250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233140644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233151290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233163037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233173686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233191346Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233203177Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233214306Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233240638Z" level=info 
msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233287574Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233299911Z" level=info msg="Start snapshots syncer" Jan 22 00:43:35.235049 containerd[2460]: time="2026-01-22T00:43:35.233337715Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 22 00:43:35.235331 containerd[2460]: time="2026-01-22T00:43:35.233613275Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 22 00:43:35.235331 containerd[2460]: time="2026-01-22T00:43:35.233662704Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233709247Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233804808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233829228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233846032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233858236Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 
Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233870419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233883369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233894598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233908057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233922122Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233957267Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233970319Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 22 00:43:35.235460 containerd[2460]: time="2026-01-22T00:43:35.233979293Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.233988848Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.233996410Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234006857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234016958Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234031058Z" level=info msg="runtime interface created" Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234036217Z" level=info msg="created NRI interface" Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234044664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234057707Z" level=info msg="Connect containerd service" Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234077026Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 22 00:43:35.235704 containerd[2460]: time="2026-01-22T00:43:35.234671056Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 22 00:43:35.411548 tar[2435]: linux-amd64/README.md Jan 22 00:43:35.435654 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
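Editor's note on the "failed to load cni during init" error above: this is the expected state of a node where no CNI plugin has been installed yet. The CRI config dumped earlier points at confDir /etc/cni/net.d and binDirs /opt/cni/bin, and that directory is still empty at this point. The sketch below shows the kind of conflist containerd's conf syncer would pick up once a network plugin drops one there; the network name, subnet, and filename are made up for illustration, and real clusters normally get this file from their CNI plugin's own installer or DaemonSet:

```python
# Sketch: write an illustrative bridge/host-local conflist into /etc/cni/net.d.
# All values are placeholders for the example, not a recommended network layout.
import json
import pathlib

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-pod-network",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],  # placeholder subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

target = pathlib.Path("/etc/cni/net.d/10-example.conflist")  # hypothetical filename
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(json.dumps(conflist, indent=2) + "\n")
print(f"wrote {target}")
```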
Jan 22 00:43:35.444604 containerd[2460]: time="2026-01-22T00:43:35.444539852Z" level=info msg="Start subscribing containerd event" Jan 22 00:43:35.444701 containerd[2460]: time="2026-01-22T00:43:35.444603509Z" level=info msg="Start recovering state" Jan 22 00:43:35.444728 containerd[2460]: time="2026-01-22T00:43:35.444716137Z" level=info msg="Start event monitor" Jan 22 00:43:35.444779 containerd[2460]: time="2026-01-22T00:43:35.444728822Z" level=info msg="Start cni network conf syncer for default" Jan 22 00:43:35.444920 containerd[2460]: time="2026-01-22T00:43:35.444904378Z" level=info msg="Start streaming server" Jan 22 00:43:35.444949 containerd[2460]: time="2026-01-22T00:43:35.444928993Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 22 00:43:35.444949 containerd[2460]: time="2026-01-22T00:43:35.444938603Z" level=info msg="runtime interface starting up..." Jan 22 00:43:35.445070 containerd[2460]: time="2026-01-22T00:43:35.444945465Z" level=info msg="starting plugins..." Jan 22 00:43:35.445093 containerd[2460]: time="2026-01-22T00:43:35.445080422Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 22 00:43:35.445158 containerd[2460]: time="2026-01-22T00:43:35.444862931Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 22 00:43:35.445204 containerd[2460]: time="2026-01-22T00:43:35.445187257Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 22 00:43:35.445962 containerd[2460]: time="2026-01-22T00:43:35.445942764Z" level=info msg="containerd successfully booted in 0.266790s" Jan 22 00:43:35.446179 systemd[1]: Started containerd.service - containerd container runtime. Jan 22 00:43:35.864588 waagent[2541]: 2026-01-22T00:43:35.864184Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 22 00:43:35.867392 waagent[2541]: 2026-01-22T00:43:35.865908Z INFO Daemon Daemon OS: flatcar 4515.1.0 Jan 22 00:43:35.867703 waagent[2541]: 2026-01-22T00:43:35.867653Z INFO Daemon Daemon Python: 3.11.13 Jan 22 00:43:35.869761 waagent[2541]: 2026-01-22T00:43:35.868812Z INFO Daemon Daemon Run daemon Jan 22 00:43:35.871137 waagent[2541]: 2026-01-22T00:43:35.870896Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4515.1.0' Jan 22 00:43:35.873838 waagent[2541]: 2026-01-22T00:43:35.873795Z INFO Daemon Daemon Using waagent for provisioning Jan 22 00:43:35.876971 waagent[2541]: 2026-01-22T00:43:35.876939Z INFO Daemon Daemon Activate resource disk Jan 22 00:43:35.879824 waagent[2541]: 2026-01-22T00:43:35.879791Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 22 00:43:35.884348 waagent[2541]: 2026-01-22T00:43:35.884305Z INFO Daemon Daemon Found device: None Jan 22 00:43:35.887252 waagent[2541]: 2026-01-22T00:43:35.886819Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 22 00:43:35.889844 waagent[2541]: 2026-01-22T00:43:35.889807Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 22 00:43:35.895663 waagent[2541]: 2026-01-22T00:43:35.895382Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 22 00:43:35.898933 waagent[2541]: 2026-01-22T00:43:35.898863Z INFO Daemon Daemon Running default provisioning handler Jan 22 00:43:35.907922 waagent[2541]: 2026-01-22T00:43:35.907875Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command 
'['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 22 00:43:35.912257 waagent[2541]: 2026-01-22T00:43:35.912213Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 22 00:43:35.916188 waagent[2541]: 2026-01-22T00:43:35.915800Z INFO Daemon Daemon cloud-init is enabled: False Jan 22 00:43:35.919966 waagent[2541]: 2026-01-22T00:43:35.919820Z INFO Daemon Daemon Copying ovf-env.xml Jan 22 00:43:35.958610 waagent[2541]: 2026-01-22T00:43:35.958562Z INFO Daemon Daemon Successfully mounted dvd Jan 22 00:43:35.971120 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 22 00:43:35.972372 waagent[2541]: 2026-01-22T00:43:35.972324Z INFO Daemon Daemon Detect protocol endpoint Jan 22 00:43:35.975233 waagent[2541]: 2026-01-22T00:43:35.974786Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 22 00:43:35.977830 waagent[2541]: 2026-01-22T00:43:35.977806Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 22 00:43:35.980832 waagent[2541]: 2026-01-22T00:43:35.980790Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 22 00:43:35.982405 waagent[2541]: 2026-01-22T00:43:35.982362Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 22 00:43:35.983966 waagent[2541]: 2026-01-22T00:43:35.983507Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 22 00:43:36.011586 waagent[2541]: 2026-01-22T00:43:36.011330Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 22 00:43:36.023006 waagent[2541]: 2026-01-22T00:43:36.021961Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 22 00:43:36.024868 waagent[2541]: 2026-01-22T00:43:36.024816Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 22 00:43:36.066758 waagent[2541]: 2026-01-22T00:43:36.066353Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 22 00:43:36.068377 waagent[2541]: 2026-01-22T00:43:36.068322Z INFO Daemon Daemon Forcing an update of the goal state. Jan 22 00:43:36.075185 waagent[2541]: 2026-01-22T00:43:36.075149Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 22 00:43:36.090658 waagent[2541]: 2026-01-22T00:43:36.090340Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 22 00:43:36.093631 waagent[2541]: 2026-01-22T00:43:36.093144Z INFO Daemon Jan 22 00:43:36.094219 waagent[2541]: 2026-01-22T00:43:36.094173Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8dc8ba4f-e0f7-421e-8477-5076896ca423 eTag: 10229288943627376116 source: Fabric] Jan 22 00:43:36.098485 waagent[2541]: 2026-01-22T00:43:36.097292Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 22 00:43:36.099586 waagent[2541]: 2026-01-22T00:43:36.099542Z INFO Daemon Jan 22 00:43:36.101184 waagent[2541]: 2026-01-22T00:43:36.100558Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 22 00:43:36.112339 waagent[2541]: 2026-01-22T00:43:36.112025Z INFO Daemon Daemon Downloading artifacts profile blob Jan 22 00:43:36.149789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:43:36.153405 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 22 00:43:36.157237 systemd[1]: Startup finished in 3.149s (kernel) + 39.118s (initrd) + 6.460s (userspace) = 48.728s. 
Jan 22 00:43:36.159015 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:43:36.231535 waagent[2541]: 2026-01-22T00:43:36.231483Z INFO Daemon Downloaded certificate {'thumbprint': 'AB1CF2332987C44E3D7091599DD7EBDDF2FAC0B5', 'hasPrivateKey': True} Jan 22 00:43:36.234289 waagent[2541]: 2026-01-22T00:43:36.234240Z INFO Daemon Fetch goal state completed Jan 22 00:43:36.241930 waagent[2541]: 2026-01-22T00:43:36.241898Z INFO Daemon Daemon Starting provisioning Jan 22 00:43:36.242715 waagent[2541]: 2026-01-22T00:43:36.242541Z INFO Daemon Daemon Handle ovf-env.xml. Jan 22 00:43:36.244789 waagent[2541]: 2026-01-22T00:43:36.244724Z INFO Daemon Daemon Set hostname [ci-4515.1.0-n-d879fbfda5] Jan 22 00:43:36.251152 waagent[2541]: 2026-01-22T00:43:36.251114Z INFO Daemon Daemon Publish hostname [ci-4515.1.0-n-d879fbfda5] Jan 22 00:43:36.252050 waagent[2541]: 2026-01-22T00:43:36.251882Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 22 00:43:36.252050 waagent[2541]: 2026-01-22T00:43:36.252193Z INFO Daemon Daemon Primary interface is [eth0] Jan 22 00:43:36.260073 systemd-networkd[2242]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 22 00:43:36.260081 systemd-networkd[2242]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 22 00:43:36.260143 systemd-networkd[2242]: eth0: DHCP lease lost Jan 22 00:43:36.270404 waagent[2541]: 2026-01-22T00:43:36.270362Z INFO Daemon Daemon Create user account if not exists Jan 22 00:43:36.271249 waagent[2541]: 2026-01-22T00:43:36.270987Z INFO Daemon Daemon User core already exists, skip useradd Jan 22 00:43:36.274972 waagent[2541]: 2026-01-22T00:43:36.271262Z INFO Daemon Daemon Configure sudoer Jan 22 00:43:36.275927 waagent[2541]: 2026-01-22T00:43:36.275882Z INFO Daemon Daemon Configure sshd Jan 22 00:43:36.284301 waagent[2541]: 2026-01-22T00:43:36.280174Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 22 00:43:36.284301 waagent[2541]: 2026-01-22T00:43:36.280698Z INFO Daemon Daemon Deploy ssh public key. Jan 22 00:43:36.307775 systemd-networkd[2242]: eth0: DHCPv4 address 10.200.8.28/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 22 00:43:36.585698 login[2545]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 22 00:43:36.589191 login[2546]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 22 00:43:36.594477 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 22 00:43:36.595829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 22 00:43:36.603546 systemd-logind[2425]: New session 1 of user core. Jan 22 00:43:36.608216 systemd-logind[2425]: New session 2 of user core. Jan 22 00:43:36.618488 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 22 00:43:36.621059 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 22 00:43:36.643166 (systemd)[2602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 22 00:43:36.644974 systemd-logind[2425]: New session c1 of user core. Jan 22 00:43:36.810460 systemd[2602]: Queued start job for default target default.target. 
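Editor's note on the "Downloaded certificate {'thumbprint': 'AB1C...'}" line above: the thumbprint waagent logs is the conventional X.509 thumbprint, an uppercase hex SHA-1 over the certificate's DER encoding. The sketch below recomputes that value from a PEM file; the file path is only an example, and it assumes a single PEM certificate block per file:

```python
# Sketch: compute the SHA-1 thumbprint style shown in the waagent log
# (uppercase hex over the DER bytes). Path is an example; waagent keeps its
# downloaded certificates under /var/lib/waagent.
import base64
import hashlib

def pem_thumbprint(pem_path: str) -> str:
    with open(pem_path) as f:
        pem = f.read()
    # assumes exactly one "-----BEGIN/END CERTIFICATE-----" block in the file
    body = "".join(
        line for line in pem.splitlines()
        if line and not line.startswith("-----")
    )
    der = base64.b64decode(body)
    return hashlib.sha1(der).hexdigest().upper()

if __name__ == "__main__":
    print(pem_thumbprint("/var/lib/waagent/example.crt"))  # example filename
```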
Jan 22 00:43:36.818058 systemd[2602]: Created slice app.slice - User Application Slice. Jan 22 00:43:36.818098 systemd[2602]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 22 00:43:36.818112 systemd[2602]: Reached target paths.target - Paths. Jan 22 00:43:36.818154 systemd[2602]: Reached target timers.target - Timers. Jan 22 00:43:36.819592 systemd[2602]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 22 00:43:36.820818 systemd[2602]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 22 00:43:36.835170 systemd[2602]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 22 00:43:36.835349 systemd[2602]: Reached target sockets.target - Sockets. Jan 22 00:43:36.837644 systemd[2602]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 22 00:43:36.837834 systemd[2602]: Reached target basic.target - Basic System. Jan 22 00:43:36.837887 systemd[2602]: Reached target default.target - Main User Target. Jan 22 00:43:36.837912 systemd[2602]: Startup finished in 188ms. Jan 22 00:43:36.838184 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 22 00:43:36.841941 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 22 00:43:36.842808 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 22 00:43:36.900720 kubelet[2580]: E0122 00:43:36.900664 2580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:43:36.902912 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:43:36.903154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:43:36.904294 systemd[1]: kubelet.service: Consumed 961ms CPU time, 265.2M memory peak. Jan 22 00:43:47.051825 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 22 00:43:47.053788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:43:47.510768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:43:47.525934 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:43:47.560381 kubelet[2643]: E0122 00:43:47.560343 2643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:43:47.563448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:43:47.563581 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:43:47.563951 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.6M memory peak. Jan 22 00:43:57.801945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 22 00:43:57.803528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:43:58.235423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
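Editor's note on the kubelet failure above (and the identical failures repeated roughly every ten seconds below): they all come from the same condition, /var/lib/kubelet/config.yaml does not exist yet, so kubelet exits and systemd keeps restarting it. That file is normally written by `kubeadm init` or `kubeadm join`, so the restart loop is the expected state of a node that has booted but not yet joined a cluster. A small sketch of the same pre-flight check, under that assumption:

```python
# Sketch: reproduce the check kubelet is failing on. The config path comes from
# the error message in the log; the "written by kubeadm" note reflects the usual
# kubeadm workflow, not something this log itself proves.
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

if os.path.exists(KUBELET_CONFIG):
    print(f"{KUBELET_CONFIG} present; kubelet should start cleanly")
    sys.exit(0)

print(
    f"{KUBELET_CONFIG} missing: kubelet will exit and systemd will keep "
    "restarting it until `kubeadm init`/`kubeadm join` (or other provisioning) "
    "creates the file"
)
sys.exit(1)
```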
Jan 22 00:43:58.245964 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:43:58.281478 kubelet[2658]: E0122 00:43:58.281442 2658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:43:58.283138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:43:58.283269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:43:58.283631 systemd[1]: kubelet.service: Consumed 126ms CPU time, 108.5M memory peak. Jan 22 00:43:58.361669 chronyd[2406]: Selected source PHC0 Jan 22 00:44:06.342197 waagent[2541]: 2026-01-22T00:44:06.342145Z INFO Daemon Daemon Provisioning complete Jan 22 00:44:06.359012 waagent[2541]: 2026-01-22T00:44:06.358977Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 22 00:44:06.359727 waagent[2541]: 2026-01-22T00:44:06.359497Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 22 00:44:06.362658 waagent[2541]: 2026-01-22T00:44:06.362588Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 22 00:44:06.464518 waagent[2665]: 2026-01-22T00:44:06.464439Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 22 00:44:06.464883 waagent[2665]: 2026-01-22T00:44:06.464553Z INFO ExtHandler ExtHandler OS: flatcar 4515.1.0 Jan 22 00:44:06.464883 waagent[2665]: 2026-01-22T00:44:06.464594Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 22 00:44:06.464883 waagent[2665]: 2026-01-22T00:44:06.464631Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 22 00:44:06.482374 waagent[2665]: 2026-01-22T00:44:06.482315Z INFO ExtHandler ExtHandler Distro: flatcar-4515.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 22 00:44:06.482519 waagent[2665]: 2026-01-22T00:44:06.482483Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 22 00:44:06.482571 waagent[2665]: 2026-01-22T00:44:06.482548Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 22 00:44:06.488546 waagent[2665]: 2026-01-22T00:44:06.488491Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 22 00:44:06.499568 waagent[2665]: 2026-01-22T00:44:06.499536Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 22 00:44:06.499952 waagent[2665]: 2026-01-22T00:44:06.499919Z INFO ExtHandler Jan 22 00:44:06.500003 waagent[2665]: 2026-01-22T00:44:06.499977Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6bca4a6b-3d07-48bb-96f2-1e037b1140d7 eTag: 10229288943627376116 source: Fabric] Jan 22 00:44:06.500202 waagent[2665]: 2026-01-22T00:44:06.500172Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
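Editor's note on the goal-state activity above: all of it goes to the Azure WireServer at 168.63.129.16 using the wire protocol version negotiated earlier (2012-11-30). The sketch below shows roughly the request an agent makes when it fetches the goal state and reads its incarnation; the endpoint path and x-ms-version header follow the commonly documented wire protocol, but treat those details as an assumption rather than a specification of waagent's behavior:

```python
# Sketch: fetch the WireServer goal state and print its incarnation, roughly
# what the agent does before logging "Fetch goal state completed".
# Assumptions: the documented /machine/?comp=goalstate endpoint and the
# 2012-11-30 protocol version seen earlier in this log.
import urllib.request
import xml.etree.ElementTree as ET

WIRESERVER = "168.63.129.16"
GOAL_STATE_URL = f"http://{WIRESERVER}/machine/?comp=goalstate"

req = urllib.request.Request(GOAL_STATE_URL, headers={"x-ms-version": "2012-11-30"})
with urllib.request.urlopen(req, timeout=10) as resp:
    body = resp.read()

root = ET.fromstring(body)
incarnation = root.findtext("Incarnation")
print(f"WireServer goal state incarnation: {incarnation}")
```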
Jan 22 00:44:06.500525 waagent[2665]: 2026-01-22T00:44:06.500498Z INFO ExtHandler Jan 22 00:44:06.500565 waagent[2665]: 2026-01-22T00:44:06.500539Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 22 00:44:06.506192 waagent[2665]: 2026-01-22T00:44:06.506165Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 22 00:44:06.587845 waagent[2665]: 2026-01-22T00:44:06.587795Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AB1CF2332987C44E3D7091599DD7EBDDF2FAC0B5', 'hasPrivateKey': True} Jan 22 00:44:06.588181 waagent[2665]: 2026-01-22T00:44:06.588152Z INFO ExtHandler Fetch goal state completed Jan 22 00:44:06.600223 waagent[2665]: 2026-01-22T00:44:06.600142Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.3 30 Sep 2025 (Library: OpenSSL 3.4.3 30 Sep 2025) Jan 22 00:44:06.604167 waagent[2665]: 2026-01-22T00:44:06.604120Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2665 Jan 22 00:44:06.604281 waagent[2665]: 2026-01-22T00:44:06.604257Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 22 00:44:06.604516 waagent[2665]: 2026-01-22T00:44:06.604493Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 22 00:44:06.605574 waagent[2665]: 2026-01-22T00:44:06.605539Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 22 00:44:06.605909 waagent[2665]: 2026-01-22T00:44:06.605883Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4515.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 22 00:44:06.606025 waagent[2665]: 2026-01-22T00:44:06.606004Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 22 00:44:06.606411 waagent[2665]: 2026-01-22T00:44:06.606388Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 22 00:44:06.618905 waagent[2665]: 2026-01-22T00:44:06.618879Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 22 00:44:06.619041 waagent[2665]: 2026-01-22T00:44:06.619020Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 22 00:44:06.624765 waagent[2665]: 2026-01-22T00:44:06.624423Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 22 00:44:06.629582 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit waagent.service)... Jan 22 00:44:06.629595 systemd[1]: Reloading... Jan 22 00:44:06.706770 zram_generator::config[2721]: No configuration found. Jan 22 00:44:06.892050 systemd[1]: Reloading finished in 262 ms. Jan 22 00:44:06.905703 waagent[2665]: 2026-01-22T00:44:06.904463Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 22 00:44:06.905703 waagent[2665]: 2026-01-22T00:44:06.904614Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 22 00:44:07.301669 waagent[2665]: 2026-01-22T00:44:07.301603Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 22 00:44:07.302001 waagent[2665]: 2026-01-22T00:44:07.301970Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 22 00:44:07.302695 waagent[2665]: 2026-01-22T00:44:07.302662Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 22 00:44:07.303002 waagent[2665]: 2026-01-22T00:44:07.302966Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 22 00:44:07.303205 waagent[2665]: 2026-01-22T00:44:07.303152Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 22 00:44:07.303306 waagent[2665]: 2026-01-22T00:44:07.303250Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 22 00:44:07.303364 waagent[2665]: 2026-01-22T00:44:07.303341Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 22 00:44:07.303438 waagent[2665]: 2026-01-22T00:44:07.303401Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 22 00:44:07.303514 waagent[2665]: 2026-01-22T00:44:07.303494Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 22 00:44:07.303688 waagent[2665]: 2026-01-22T00:44:07.303667Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 22 00:44:07.303827 waagent[2665]: 2026-01-22T00:44:07.303789Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 22 00:44:07.304052 waagent[2665]: 2026-01-22T00:44:07.304010Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 22 00:44:07.304125 waagent[2665]: 2026-01-22T00:44:07.304110Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 22 00:44:07.304300 waagent[2665]: 2026-01-22T00:44:07.304265Z INFO EnvHandler ExtHandler Configure routes Jan 22 00:44:07.304396 waagent[2665]: 2026-01-22T00:44:07.304372Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 22 00:44:07.304638 waagent[2665]: 2026-01-22T00:44:07.304571Z INFO EnvHandler ExtHandler Gateway:None Jan 22 00:44:07.304694 waagent[2665]: 2026-01-22T00:44:07.304670Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 22 00:44:07.304694 waagent[2665]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 22 00:44:07.304694 waagent[2665]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 22 00:44:07.304694 waagent[2665]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 22 00:44:07.304694 waagent[2665]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 22 00:44:07.304694 waagent[2665]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 22 00:44:07.304694 waagent[2665]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 22 00:44:07.305046 waagent[2665]: 2026-01-22T00:44:07.304967Z INFO EnvHandler ExtHandler Routes:None Jan 22 00:44:07.312629 waagent[2665]: 2026-01-22T00:44:07.312594Z INFO ExtHandler ExtHandler Jan 22 00:44:07.312815 waagent[2665]: 2026-01-22T00:44:07.312794Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4ae1752c-e800-44d4-baa1-9d8877716a30 correlation f1441f51-08d8-48f1-b3ee-114308b56dba created: 2026-01-22T00:42:34.079704Z] Jan 22 00:44:07.313173 waagent[2665]: 2026-01-22T00:44:07.313153Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
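Editor's note on the routing table dump above: it is read straight from /proc/net/route, where the Destination and Gateway columns are little-endian 32-bit hex IPv4 addresses (0108C80A is 10.200.8.1, 10813FA8 is the WireServer address 168.63.129.16). A short decoder for reading those columns, assuming only the standard /proc/net/route format:

```python
# Sketch: decode the hex Destination/Gateway columns printed by waagent above.
# /proc/net/route stores IPv4 addresses as little-endian 32-bit hex values.
import socket
import struct

def hex_to_ip(hexaddr: str) -> str:
    return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

for column in ("00000000", "0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
    print(column, "->", hex_to_ip(column))
# 00000000 -> 0.0.0.0         (default route)
# 0108C80A -> 10.200.8.1      (gateway)
# 0008C80A -> 10.200.8.0      (local subnet)
# 10813FA8 -> 168.63.129.16   (Azure WireServer)
# FEA9FEA9 -> 169.254.169.254 (instance metadata service)
```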
Jan 22 00:44:07.314969 waagent[2665]: 2026-01-22T00:44:07.314634Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 22 00:44:07.330522 waagent[2665]: 2026-01-22T00:44:07.330467Z INFO MonitorHandler ExtHandler Network interfaces: Jan 22 00:44:07.330522 waagent[2665]: Executing ['ip', '-a', '-o', 'link']: Jan 22 00:44:07.330522 waagent[2665]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 22 00:44:07.330522 waagent[2665]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2c:40:93 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7ced8d2c4093 Jan 22 00:44:07.330522 waagent[2665]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2c:40:93 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 22 00:44:07.330522 waagent[2665]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 22 00:44:07.330522 waagent[2665]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 22 00:44:07.330522 waagent[2665]: 2: eth0 inet 10.200.8.28/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 22 00:44:07.330522 waagent[2665]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 22 00:44:07.330522 waagent[2665]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 22 00:44:07.330522 waagent[2665]: 2: eth0 inet6 fe80::7eed:8dff:fe2c:4093/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 22 00:44:07.356383 waagent[2665]: 2026-01-22T00:44:07.355818Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 22 00:44:07.356383 waagent[2665]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 22 00:44:07.356383 waagent[2665]: pkts bytes target prot opt in out source destination Jan 22 00:44:07.356383 waagent[2665]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 22 00:44:07.356383 waagent[2665]: pkts bytes target prot opt in out source destination Jan 22 00:44:07.356383 waagent[2665]: Chain OUTPUT (policy ACCEPT 4 packets, 408 bytes) Jan 22 00:44:07.356383 waagent[2665]: pkts bytes target prot opt in out source destination Jan 22 00:44:07.356383 waagent[2665]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 22 00:44:07.356383 waagent[2665]: 5 408 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 22 00:44:07.356383 waagent[2665]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 22 00:44:07.358323 waagent[2665]: 2026-01-22T00:44:07.358282Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 22 00:44:07.358323 waagent[2665]: Try `iptables -h' or 'iptables --help' for more information.) 
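Editor's note on the firewall rules listed above: waagent protects the WireServer address by allowing DNS (tcp/53) to 168.63.129.16, allowing traffic from root-owned processes, and dropping any other new connection to that address. The sketch below gives a rough approximation of those rules as explicit iptables invocations; the table name ("security") is inferred from the agent's own failed `iptables -w -t security -L OUTPUT ...` query in this log, and the exact flags and ordering are assumptions, since waagent manages the real rules itself:

```python
# Sketch: roughly equivalent iptables commands for the three OUTPUT rules shown
# in the log. Run as root; expect waagent to keep managing the real rules.
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # allow DNS lookups against the WireServer
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # allow traffic from root-owned processes (the agent runs as root)
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop any other new connection attempt to the WireServer
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    cmd = ["iptables", "-w", "-t", "security", "-A", "OUTPUT", *rule]
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```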
Jan 22 00:44:07.358663 waagent[2665]: 2026-01-22T00:44:07.358632Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 98073789-9CAC-4287-8FE8-E83EC1EC4B76;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 22 00:44:07.359715 waagent[2665]: 2026-01-22T00:44:07.359682Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 22 00:44:07.359715 waagent[2665]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 22 00:44:07.359715 waagent[2665]: pkts bytes target prot opt in out source destination Jan 22 00:44:07.359715 waagent[2665]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 22 00:44:07.359715 waagent[2665]: pkts bytes target prot opt in out source destination Jan 22 00:44:07.359715 waagent[2665]: Chain OUTPUT (policy ACCEPT 4 packets, 408 bytes) Jan 22 00:44:07.359715 waagent[2665]: pkts bytes target prot opt in out source destination Jan 22 00:44:07.359715 waagent[2665]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 22 00:44:07.359715 waagent[2665]: 10 1047 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 22 00:44:07.359715 waagent[2665]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 22 00:44:08.301810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 22 00:44:08.303253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:44:08.738856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:08.744926 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:44:08.778275 kubelet[2822]: E0122 00:44:08.778239 2822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:44:08.779746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:44:08.779876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:44:08.780224 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.5M memory peak. Jan 22 00:44:18.801809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 22 00:44:18.804971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:44:19.253788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:19.264932 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:44:19.296903 kubelet[2837]: E0122 00:44:19.296866 2837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:44:19.298348 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:44:19.298472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:44:19.298825 systemd[1]: kubelet.service: Consumed 122ms CPU time, 110.3M memory peak. 
Jan 22 00:44:19.470893 update_engine[2428]: I20260122 00:44:19.470813 2428 update_attempter.cc:509] Updating boot flags... Jan 22 00:44:20.083538 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 22 00:44:20.474270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 22 00:44:20.475622 systemd[1]: Started sshd@0-10.200.8.28:22-10.200.16.10:42104.service - OpenSSH per-connection server daemon (10.200.16.10:42104). Jan 22 00:44:21.087841 sshd[2868]: Accepted publickey for core from 10.200.16.10 port 42104 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:44:21.088947 sshd-session[2868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:21.093612 systemd-logind[2425]: New session 3 of user core. Jan 22 00:44:21.100891 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 22 00:44:21.531439 systemd[1]: Started sshd@1-10.200.8.28:22-10.200.16.10:42108.service - OpenSSH per-connection server daemon (10.200.16.10:42108). Jan 22 00:44:22.098966 sshd[2874]: Accepted publickey for core from 10.200.16.10 port 42108 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:44:22.100151 sshd-session[2874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:22.104688 systemd-logind[2425]: New session 4 of user core. Jan 22 00:44:22.115897 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 22 00:44:22.421710 sshd[2877]: Connection closed by 10.200.16.10 port 42108 Jan 22 00:44:22.422251 sshd-session[2874]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:22.425981 systemd-logind[2425]: Session 4 logged out. Waiting for processes to exit. Jan 22 00:44:22.426156 systemd[1]: sshd@1-10.200.8.28:22-10.200.16.10:42108.service: Deactivated successfully. Jan 22 00:44:22.427863 systemd[1]: session-4.scope: Deactivated successfully. Jan 22 00:44:22.429293 systemd-logind[2425]: Removed session 4. Jan 22 00:44:22.550462 systemd[1]: Started sshd@2-10.200.8.28:22-10.200.16.10:42110.service - OpenSSH per-connection server daemon (10.200.16.10:42110). Jan 22 00:44:23.128442 sshd[2883]: Accepted publickey for core from 10.200.16.10 port 42110 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:44:23.130636 sshd-session[2883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:23.135132 systemd-logind[2425]: New session 5 of user core. Jan 22 00:44:23.143924 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 22 00:44:23.448906 sshd[2886]: Connection closed by 10.200.16.10 port 42110 Jan 22 00:44:23.449399 sshd-session[2883]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:23.452667 systemd[1]: sshd@2-10.200.8.28:22-10.200.16.10:42110.service: Deactivated successfully. Jan 22 00:44:23.454232 systemd[1]: session-5.scope: Deactivated successfully. Jan 22 00:44:23.455476 systemd-logind[2425]: Session 5 logged out. Waiting for processes to exit. Jan 22 00:44:23.456430 systemd-logind[2425]: Removed session 5. Jan 22 00:44:23.565420 systemd[1]: Started sshd@3-10.200.8.28:22-10.200.16.10:42112.service - OpenSSH per-connection server daemon (10.200.16.10:42112). 
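Editor's note on the "Accepted publickey ... SHA256:hQip..." lines above: that string is OpenSSH's standard key fingerprint, the unpadded base64 of the SHA-256 digest of the raw public-key blob. The sketch below reproduces that format from a public key file; the path is only an example, and it assumes the simple "type base64-blob comment" line layout without option prefixes:

```python
# Sketch: compute an OpenSSH-style "SHA256:..." fingerprint for a public key,
# matching the format shown in the "Accepted publickey" log lines.
import base64
import hashlib

def ssh_sha256_fingerprint(pubkey_line: str) -> str:
    # assumes a line of the form "ssh-rsa AAAAB3... comment" (no options prefix)
    blob_b64 = pubkey_line.split()[1]
    blob = base64.b64decode(blob_b64)
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    with open("/home/core/.ssh/authorized_keys") as f:  # example path
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                print(ssh_sha256_fingerprint(line), line.split()[-1])
```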
Jan 22 00:44:24.132778 sshd[2892]: Accepted publickey for core from 10.200.16.10 port 42112 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:44:24.133363 sshd-session[2892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:24.138007 systemd-logind[2425]: New session 6 of user core. Jan 22 00:44:24.144913 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 22 00:44:24.454636 sshd[2895]: Connection closed by 10.200.16.10 port 42112 Jan 22 00:44:24.455182 sshd-session[2892]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:24.458056 systemd[1]: sshd@3-10.200.8.28:22-10.200.16.10:42112.service: Deactivated successfully. Jan 22 00:44:24.459761 systemd[1]: session-6.scope: Deactivated successfully. Jan 22 00:44:24.461590 systemd-logind[2425]: Session 6 logged out. Waiting for processes to exit. Jan 22 00:44:24.462277 systemd-logind[2425]: Removed session 6. Jan 22 00:44:24.583409 systemd[1]: Started sshd@4-10.200.8.28:22-10.200.16.10:42116.service - OpenSSH per-connection server daemon (10.200.16.10:42116). Jan 22 00:44:25.158178 sshd[2901]: Accepted publickey for core from 10.200.16.10 port 42116 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:44:25.159277 sshd-session[2901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:25.163592 systemd-logind[2425]: New session 7 of user core. Jan 22 00:44:25.169899 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 22 00:44:25.417183 sudo[2905]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 22 00:44:25.417416 sudo[2905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:44:25.434468 sudo[2905]: pam_unix(sudo:session): session closed for user root Jan 22 00:44:25.541122 sshd[2904]: Connection closed by 10.200.16.10 port 42116 Jan 22 00:44:25.541986 sshd-session[2901]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:25.545137 systemd[1]: sshd@4-10.200.8.28:22-10.200.16.10:42116.service: Deactivated successfully. Jan 22 00:44:25.547058 systemd[1]: session-7.scope: Deactivated successfully. Jan 22 00:44:25.548303 systemd-logind[2425]: Session 7 logged out. Waiting for processes to exit. Jan 22 00:44:25.549611 systemd-logind[2425]: Removed session 7. Jan 22 00:44:25.659599 systemd[1]: Started sshd@5-10.200.8.28:22-10.200.16.10:42126.service - OpenSSH per-connection server daemon (10.200.16.10:42126). Jan 22 00:44:26.247294 sshd[2911]: Accepted publickey for core from 10.200.16.10 port 42126 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:44:26.248517 sshd-session[2911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:26.253100 systemd-logind[2425]: New session 8 of user core. Jan 22 00:44:26.259918 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 22 00:44:26.465949 sudo[2916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 22 00:44:26.466167 sudo[2916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:44:26.472653 sudo[2916]: pam_unix(sudo:session): session closed for user root Jan 22 00:44:26.477571 sudo[2915]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 22 00:44:26.477813 sudo[2915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:44:26.485670 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 22 00:44:26.511000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:44:26.513417 kernel: kauditd_printk_skb: 194 callbacks suppressed Jan 22 00:44:26.513464 kernel: audit: type=1305 audit(1769042666.511:241): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 22 00:44:26.511000 audit[2938]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffffb8add60 a2=420 a3=0 items=0 ppid=2919 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:26.515891 augenrules[2938]: No rules Jan 22 00:44:26.517088 systemd[1]: audit-rules.service: Deactivated successfully. Jan 22 00:44:26.517440 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 22 00:44:26.519035 sudo[2915]: pam_unix(sudo:session): session closed for user root Jan 22 00:44:26.521680 kernel: audit: type=1300 audit(1769042666.511:241): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffffb8add60 a2=420 a3=0 items=0 ppid=2919 pid=2938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:26.511000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:44:26.523759 kernel: audit: type=1327 audit(1769042666.511:241): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 22 00:44:26.523800 kernel: audit: type=1130 audit(1769042666.514:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.529867 kernel: audit: type=1131 audit(1769042666.514:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:44:26.514000 audit[2915]: USER_END pid=2915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.532972 kernel: audit: type=1106 audit(1769042666.514:244): pid=2915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.514000 audit[2915]: CRED_DISP pid=2915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.535873 kernel: audit: type=1104 audit(1769042666.514:245): pid=2915 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.627676 sshd[2914]: Connection closed by 10.200.16.10 port 42126 Jan 22 00:44:26.628921 sshd-session[2911]: pam_unix(sshd:session): session closed for user core Jan 22 00:44:26.629000 audit[2911]: USER_END pid=2911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:26.636053 kernel: audit: type=1106 audit(1769042666.629:246): pid=2911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:26.636115 kernel: audit: type=1104 audit(1769042666.629:247): pid=2911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:26.629000 audit[2911]: CRED_DISP pid=2911 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:26.635811 systemd[1]: sshd@5-10.200.8.28:22-10.200.16.10:42126.service: Deactivated successfully. Jan 22 00:44:26.639900 kernel: audit: type=1131 audit(1769042666.635:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.28:22-10.200.16.10:42126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.28:22-10.200.16.10:42126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.639600 systemd[1]: session-8.scope: Deactivated successfully. Jan 22 00:44:26.641568 systemd-logind[2425]: Session 8 logged out. Waiting for processes to exit. Jan 22 00:44:26.642610 systemd-logind[2425]: Removed session 8. 
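Editor's note on the audit PROCTITLE records above and below: the audited command line is encoded as hex with NUL separators between arguments. The auditctl record above decodes to `/sbin/auditctl -R /etc/audit/audit.rules`, and the Docker-related ones further down decode to commands such as `/usr/bin/iptables --wait -t nat -N DOCKER`. A small decoder for reading those fields:

```python
# Sketch: decode the hex "proctitle=" payload from the audit PROCTITLE records
# in this log; arguments are separated by NUL bytes.
def decode_proctitle(hex_payload: str) -> str:
    raw = bytes.fromhex(hex_payload)
    return " ".join(part.decode(errors="replace") for part in raw.split(b"\x00") if part)

samples = [
    # from the audit-rules reload above
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573",
    # from dockerd creating its nat DOCKER chain below
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552",
]
for sample in samples:
    print(decode_proctitle(sample))
# /sbin/auditctl -R /etc/audit/audit.rules
# /usr/bin/iptables --wait -t nat -N DOCKER
```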
Jan 22 00:44:26.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.28:22-10.200.16.10:42136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:26.771330 systemd[1]: Started sshd@6-10.200.8.28:22-10.200.16.10:42136.service - OpenSSH per-connection server daemon (10.200.16.10:42136). Jan 22 00:44:27.337000 audit[2947]: USER_ACCT pid=2947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:27.338890 sshd[2947]: Accepted publickey for core from 10.200.16.10 port 42136 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:44:27.338000 audit[2947]: CRED_ACQ pid=2947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:27.338000 audit[2947]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe25e25640 a2=3 a3=0 items=0 ppid=1 pid=2947 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:27.338000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:44:27.340023 sshd-session[2947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:44:27.344605 systemd-logind[2425]: New session 9 of user core. Jan 22 00:44:27.353915 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 22 00:44:27.355000 audit[2947]: USER_START pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:27.356000 audit[2950]: CRED_ACQ pid=2950 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:44:27.556000 audit[2951]: USER_ACCT pid=2951 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:44:27.557600 sudo[2951]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 22 00:44:27.556000 audit[2951]: CRED_REFR pid=2951 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:44:27.557851 sudo[2951]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 22 00:44:27.558000 audit[2951]: USER_START pid=2951 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 00:44:28.061579 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 22 00:44:28.074009 (dockerd)[2969]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 22 00:44:28.449835 dockerd[2969]: time="2026-01-22T00:44:28.449582889Z" level=info msg="Starting up" Jan 22 00:44:28.450757 dockerd[2969]: time="2026-01-22T00:44:28.450617344Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 22 00:44:28.459657 dockerd[2969]: time="2026-01-22T00:44:28.459618183Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 22 00:44:28.487392 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3206888311-merged.mount: Deactivated successfully. Jan 22 00:44:28.608067 dockerd[2969]: time="2026-01-22T00:44:28.607860600Z" level=info msg="Loading containers: start." Jan 22 00:44:28.624759 kernel: Initializing XFRM netlink socket Jan 22 00:44:28.654000 audit[3015]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.654000 audit[3015]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffebe2159e0 a2=0 a3=0 items=0 ppid=2969 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.654000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:44:28.656000 audit[3017]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.656000 audit[3017]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe10601c50 a2=0 a3=0 items=0 ppid=2969 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.656000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:44:28.657000 audit[3019]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.657000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc3527190 a2=0 a3=0 items=0 ppid=2969 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:44:28.659000 audit[3021]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.659000 audit[3021]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcae48ace0 a2=0 a3=0 items=0 ppid=2969 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:44:28.659000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:44:28.660000 audit[3023]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.660000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc1480a7b0 a2=0 a3=0 items=0 ppid=2969 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:44:28.662000 audit[3025]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.662000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd84d9b870 a2=0 a3=0 items=0 ppid=2969 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:44:28.664000 audit[3027]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.664000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff52e91670 a2=0 a3=0 items=0 ppid=2969 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.664000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:44:28.666000 audit[3029]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.666000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd727215f0 a2=0 a3=0 items=0 ppid=2969 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.666000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:44:28.687000 audit[3032]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.687000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff83b7d520 a2=0 a3=0 items=0 ppid=2969 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.687000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 22 00:44:28.689000 audit[3034]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.689000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdc10b97a0 a2=0 a3=0 items=0 ppid=2969 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:44:28.691000 audit[3036]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.691000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff2497e370 a2=0 a3=0 items=0 ppid=2969 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:44:28.693000 audit[3038]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.693000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe18954940 a2=0 a3=0 items=0 ppid=2969 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:44:28.694000 audit[3040]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.694000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc2883f3d0 a2=0 a3=0 items=0 ppid=2969 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:44:28.731000 audit[3070]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.731000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdf665d490 a2=0 a3=0 items=0 ppid=2969 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.731000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 22 00:44:28.733000 audit[3072]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.733000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffee5ed93b0 a2=0 a3=0 items=0 ppid=2969 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.733000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 22 00:44:28.735000 audit[3074]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.735000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe73cd700 a2=0 a3=0 items=0 ppid=2969 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.735000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 22 00:44:28.737000 audit[3076]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.737000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8967a9c0 a2=0 a3=0 items=0 ppid=2969 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 22 00:44:28.739000 audit[3078]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.739000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8685f650 a2=0 a3=0 items=0 ppid=2969 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 22 00:44:28.741000 audit[3080]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.741000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff84210ed0 a2=0 a3=0 items=0 ppid=2969 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:44:28.742000 audit[3082]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3082 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.742000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff19dc1520 a2=0 a3=0 items=0 ppid=2969 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.742000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:44:28.744000 audit[3084]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.744000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdbd82fa20 a2=0 a3=0 items=0 ppid=2969 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 22 00:44:28.746000 audit[3086]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.746000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd5ec41da0 a2=0 a3=0 items=0 ppid=2969 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 22 00:44:28.748000 audit[3088]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.748000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffd88f79c0 a2=0 a3=0 items=0 ppid=2969 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 22 00:44:28.750000 audit[3090]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.750000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffd7e58dc0 a2=0 a3=0 items=0 ppid=2969 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.750000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 22 00:44:28.752000 audit[3092]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 22 00:44:28.752000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffde0040c20 a2=0 a3=0 items=0 ppid=2969 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.752000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 22 00:44:28.753000 audit[3094]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.753000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffeaecb5910 a2=0 a3=0 items=0 ppid=2969 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.753000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 22 00:44:28.757000 audit[3099]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.757000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce8a8a8e0 a2=0 a3=0 items=0 ppid=2969 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:44:28.759000 audit[3101]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.759000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdb39585f0 a2=0 a3=0 items=0 ppid=2969 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.759000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:44:28.761000 audit[3103]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.761000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff3b989b20 a2=0 a3=0 items=0 ppid=2969 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:44:28.762000 audit[3105]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.762000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcbf55ce90 a2=0 a3=0 items=0 ppid=2969 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.762000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 22 00:44:28.764000 audit[3107]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.764000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffed1e1c4c0 a2=0 a3=0 items=0 ppid=2969 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.764000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 22 00:44:28.766000 audit[3109]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:28.766000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffb5ff1eb0 a2=0 a3=0 items=0 ppid=2969 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.766000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 22 00:44:28.796000 audit[3114]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.796000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffec9b45b80 a2=0 a3=0 items=0 ppid=2969 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.796000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 22 00:44:28.798000 audit[3116]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.798000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffebbaed560 a2=0 a3=0 items=0 ppid=2969 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.798000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 22 00:44:28.805000 audit[3124]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.805000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffec4013900 a2=0 a3=0 items=0 ppid=2969 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
22 00:44:28.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 22 00:44:28.809000 audit[3129]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.809000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc4d8e9320 a2=0 a3=0 items=0 ppid=2969 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 22 00:44:28.811000 audit[3131]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.811000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe35836b90 a2=0 a3=0 items=0 ppid=2969 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 22 00:44:28.813000 audit[3133]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.813000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffde5b4ceb0 a2=0 a3=0 items=0 ppid=2969 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.813000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 22 00:44:28.814000 audit[3135]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.814000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe9dba1080 a2=0 a3=0 items=0 ppid=2969 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.814000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 22 00:44:28.816000 audit[3137]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:28.816000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcc5d35120 a2=0 a3=0 items=0 ppid=2969 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:28.816000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 22 00:44:28.817729 systemd-networkd[2242]: docker0: Link UP Jan 22 00:44:28.830533 dockerd[2969]: time="2026-01-22T00:44:28.830505319Z" level=info msg="Loading containers: done." Jan 22 00:44:28.841333 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1583274612-merged.mount: Deactivated successfully. Jan 22 00:44:28.883830 dockerd[2969]: time="2026-01-22T00:44:28.883773849Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 22 00:44:28.883949 dockerd[2969]: time="2026-01-22T00:44:28.883875868Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 22 00:44:28.883982 dockerd[2969]: time="2026-01-22T00:44:28.883954415Z" level=info msg="Initializing buildkit" Jan 22 00:44:28.930209 dockerd[2969]: time="2026-01-22T00:44:28.930180645Z" level=info msg="Completed buildkit initialization" Jan 22 00:44:28.936246 dockerd[2969]: time="2026-01-22T00:44:28.936219854Z" level=info msg="Daemon has completed initialization" Jan 22 00:44:28.936404 dockerd[2969]: time="2026-01-22T00:44:28.936315863Z" level=info msg="API listen on /run/docker.sock" Jan 22 00:44:28.936485 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 22 00:44:28.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:29.301650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 22 00:44:29.303526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:44:29.745868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:29.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:29.755930 (kubelet)[3184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:44:29.788282 kubelet[3184]: E0122 00:44:29.788233 3184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:44:29.789765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:44:29.789892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:44:29.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 22 00:44:29.790274 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.1M memory peak. Jan 22 00:44:30.220607 containerd[2460]: time="2026-01-22T00:44:30.220567663Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 22 00:44:31.099115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273545570.mount: Deactivated successfully. Jan 22 00:44:32.233619 containerd[2460]: time="2026-01-22T00:44:32.233564197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:32.239262 containerd[2460]: time="2026-01-22T00:44:32.239121501Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28095080" Jan 22 00:44:32.242749 containerd[2460]: time="2026-01-22T00:44:32.242714260Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:32.246520 containerd[2460]: time="2026-01-22T00:44:32.246490163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:32.247246 containerd[2460]: time="2026-01-22T00:44:32.247221316Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.026612952s" Jan 22 00:44:32.247327 containerd[2460]: time="2026-01-22T00:44:32.247315871Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 22 00:44:32.247887 containerd[2460]: time="2026-01-22T00:44:32.247852018Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 22 00:44:33.808275 containerd[2460]: time="2026-01-22T00:44:33.808226963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:33.810802 containerd[2460]: time="2026-01-22T00:44:33.810774550Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24987951" Jan 22 00:44:33.813587 containerd[2460]: time="2026-01-22T00:44:33.813543815Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:33.827058 containerd[2460]: time="2026-01-22T00:44:33.826994615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:33.827844 containerd[2460]: time="2026-01-22T00:44:33.827717302Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.579826357s" Jan 22 00:44:33.827844 containerd[2460]: time="2026-01-22T00:44:33.827760130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 22 00:44:33.828399 containerd[2460]: time="2026-01-22T00:44:33.828377016Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 22 00:44:35.250010 containerd[2460]: time="2026-01-22T00:44:35.249960871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:35.252619 containerd[2460]: time="2026-01-22T00:44:35.252582488Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 22 00:44:35.255552 containerd[2460]: time="2026-01-22T00:44:35.255512340Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:35.259361 containerd[2460]: time="2026-01-22T00:44:35.259315538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:35.260074 containerd[2460]: time="2026-01-22T00:44:35.259962380Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.431558956s" Jan 22 00:44:35.260074 containerd[2460]: time="2026-01-22T00:44:35.259990548Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 22 00:44:35.260563 containerd[2460]: time="2026-01-22T00:44:35.260530973Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 22 00:44:36.088962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979642851.mount: Deactivated successfully. 
Jan 22 00:44:36.478907 containerd[2460]: time="2026-01-22T00:44:36.478668717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:36.481439 containerd[2460]: time="2026-01-22T00:44:36.481349611Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 22 00:44:36.484040 containerd[2460]: time="2026-01-22T00:44:36.484014175Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:36.487208 containerd[2460]: time="2026-01-22T00:44:36.487156991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:36.487617 containerd[2460]: time="2026-01-22T00:44:36.487460216Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.226898417s" Jan 22 00:44:36.487617 containerd[2460]: time="2026-01-22T00:44:36.487489224Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 22 00:44:36.488087 containerd[2460]: time="2026-01-22T00:44:36.488064290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 22 00:44:37.194550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861488798.mount: Deactivated successfully. 
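The dockerd and containerd lines above use a logfmt-style layout (time="..." level=info msg="..." plus optional key=value pairs). A minimal sketch for pulling the fields out of one such line; it deliberately ignores escaped quotes inside msg, which is enough for the simpler lines here (parse_fields is a hypothetical helper, and the sample is copied from the "Loading containers: done." entry above):

    import re

    FIELD = re.compile(r'(\w+)=(?:"([^"]*)"|(\S+))')

    def parse_fields(line: str) -> dict:
        # Quoted values win; otherwise take the bare token.
        return {k: q if q else b for k, q, b in FIELD.findall(line)}

    sample = 'time="2026-01-22T00:44:28.830505319Z" level=info msg="Loading containers: done."'
    fields = parse_fields(sample)
    print(fields["level"], "-", fields["msg"])  # info - Loading containers: done.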
Jan 22 00:44:38.141582 containerd[2460]: time="2026-01-22T00:44:38.141528589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:38.144310 containerd[2460]: time="2026-01-22T00:44:38.144275604Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 22 00:44:38.147010 containerd[2460]: time="2026-01-22T00:44:38.146956914Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:38.151492 containerd[2460]: time="2026-01-22T00:44:38.151447811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:38.152496 containerd[2460]: time="2026-01-22T00:44:38.152060326Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.66388133s" Jan 22 00:44:38.152496 containerd[2460]: time="2026-01-22T00:44:38.152092271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 22 00:44:38.152761 containerd[2460]: time="2026-01-22T00:44:38.152742144Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 22 00:44:38.695566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520259712.mount: Deactivated successfully. 
Jan 22 00:44:38.715754 containerd[2460]: time="2026-01-22T00:44:38.715708735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:44:38.718436 containerd[2460]: time="2026-01-22T00:44:38.718403461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 22 00:44:38.721757 containerd[2460]: time="2026-01-22T00:44:38.721706182Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:44:38.726063 containerd[2460]: time="2026-01-22T00:44:38.726019460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 22 00:44:38.726699 containerd[2460]: time="2026-01-22T00:44:38.726434193Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.667341ms" Jan 22 00:44:38.726699 containerd[2460]: time="2026-01-22T00:44:38.726461878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 22 00:44:38.726964 containerd[2460]: time="2026-01-22T00:44:38.726936594Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 22 00:44:39.460790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159821224.mount: Deactivated successfully. Jan 22 00:44:39.801799 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 22 00:44:39.803973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:44:40.267914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:40.269516 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 22 00:44:40.269590 kernel: audit: type=1130 audit(1769042680.266:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:44:40.280971 (kubelet)[3359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 22 00:44:40.322274 kubelet[3359]: E0122 00:44:40.322174 3359 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 22 00:44:40.324160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 22 00:44:40.324291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 22 00:44:40.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:44:40.324635 systemd[1]: kubelet.service: Consumed 143ms CPU time, 110.3M memory peak. Jan 22 00:44:40.328754 kernel: audit: type=1131 audit(1769042680.322:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:44:41.821269 containerd[2460]: time="2026-01-22T00:44:41.821222054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:41.823750 containerd[2460]: time="2026-01-22T00:44:41.823701651Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56280151" Jan 22 00:44:41.827700 containerd[2460]: time="2026-01-22T00:44:41.827463948Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:41.837459 containerd[2460]: time="2026-01-22T00:44:41.837428750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:44:41.838159 containerd[2460]: time="2026-01-22T00:44:41.838135861Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.111169955s" Jan 22 00:44:41.838246 containerd[2460]: time="2026-01-22T00:44:41.838233337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 22 00:44:43.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:43.928399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:43.928574 systemd[1]: kubelet.service: Consumed 143ms CPU time, 110.3M memory peak. 
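Both kubelet start attempts above (restart counters 5 and 6) fail the same way: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is normally written by kubeadm init/join, so a crash loop here is expected until that happens; the start at 00:44:44 below gets past config loading, which suggests the file is in place by then. A small sketch, assuming the journal has been saved to a plain text file (journal.txt is a hypothetical name), that correlates the restart attempts with the failure reason:

    import re

    restart = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    failure = re.compile(r"failed to load kubelet config file, path: (\S+),")

    with open("journal.txt") as fh:          # hypothetical dump of this log
        for line in fh:
            if (m := restart.search(line)):
                print("restart attempt", m.group(1))
            elif (m := failure.search(line)):
                print("  cause: missing", m.group(1))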
Jan 22 00:44:43.934756 kernel: audit: type=1130 audit(1769042683.926:303): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:43.932979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:44:43.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:43.940763 kernel: audit: type=1131 audit(1769042683.926:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:43.963135 systemd[1]: Reload requested from client PID 3419 ('systemctl') (unit session-9.scope)... Jan 22 00:44:43.963148 systemd[1]: Reloading... Jan 22 00:44:44.067774 zram_generator::config[3469]: No configuration found. Jan 22 00:44:44.253505 systemd[1]: Reloading finished in 290 ms. Jan 22 00:44:44.277796 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 22 00:44:44.277869 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 22 00:44:44.278122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:44.278170 systemd[1]: kubelet.service: Consumed 66ms CPU time, 64.7M memory peak. Jan 22 00:44:44.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:44:44.282991 kernel: audit: type=1130 audit(1769042684.276:305): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 22 00:44:44.283033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
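The burst of audit: BPF prog-id=... op=LOAD/UNLOAD records that follows appears to be systemd re-attaching its per-unit cgroup BPF programs as part of the reload that just finished, with each new LOAD paired against an UNLOAD of the program it replaces; it is not kubelet-specific. A quick way to sanity-check that pairing from a saved journal (again a sketch, journal.txt is a hypothetical file name):

    import re
    from collections import Counter

    op = re.compile(r"audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)")
    with open("journal.txt") as fh:
        counts = Counter(m.group(1) for line in fh for m in op.finditer(line))
    print(counts)  # LOAD and UNLOAD counts should come out roughly equal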
Jan 22 00:44:44.282000 audit: BPF prog-id=86 op=LOAD Jan 22 00:44:44.288592 kernel: audit: type=1334 audit(1769042684.282:306): prog-id=86 op=LOAD Jan 22 00:44:44.288644 kernel: audit: type=1334 audit(1769042684.282:307): prog-id=85 op=UNLOAD Jan 22 00:44:44.282000 audit: BPF prog-id=85 op=UNLOAD Jan 22 00:44:44.288762 kernel: audit: type=1334 audit(1769042684.282:308): prog-id=87 op=LOAD Jan 22 00:44:44.282000 audit: BPF prog-id=87 op=LOAD Jan 22 00:44:44.282000 audit: BPF prog-id=88 op=LOAD Jan 22 00:44:44.290324 kernel: audit: type=1334 audit(1769042684.282:309): prog-id=88 op=LOAD Jan 22 00:44:44.290762 kernel: audit: type=1334 audit(1769042684.282:310): prog-id=76 op=UNLOAD Jan 22 00:44:44.282000 audit: BPF prog-id=76 op=UNLOAD Jan 22 00:44:44.282000 audit: BPF prog-id=77 op=UNLOAD Jan 22 00:44:44.283000 audit: BPF prog-id=89 op=LOAD Jan 22 00:44:44.283000 audit: BPF prog-id=70 op=UNLOAD Jan 22 00:44:44.283000 audit: BPF prog-id=90 op=LOAD Jan 22 00:44:44.283000 audit: BPF prog-id=91 op=LOAD Jan 22 00:44:44.283000 audit: BPF prog-id=71 op=UNLOAD Jan 22 00:44:44.283000 audit: BPF prog-id=72 op=UNLOAD Jan 22 00:44:44.292000 audit: BPF prog-id=92 op=LOAD Jan 22 00:44:44.292000 audit: BPF prog-id=78 op=UNLOAD Jan 22 00:44:44.292000 audit: BPF prog-id=93 op=LOAD Jan 22 00:44:44.292000 audit: BPF prog-id=94 op=LOAD Jan 22 00:44:44.292000 audit: BPF prog-id=79 op=UNLOAD Jan 22 00:44:44.292000 audit: BPF prog-id=80 op=UNLOAD Jan 22 00:44:44.293000 audit: BPF prog-id=95 op=LOAD Jan 22 00:44:44.293000 audit: BPF prog-id=66 op=UNLOAD Jan 22 00:44:44.294000 audit: BPF prog-id=96 op=LOAD Jan 22 00:44:44.294000 audit: BPF prog-id=73 op=UNLOAD Jan 22 00:44:44.294000 audit: BPF prog-id=97 op=LOAD Jan 22 00:44:44.294000 audit: BPF prog-id=98 op=LOAD Jan 22 00:44:44.294000 audit: BPF prog-id=74 op=UNLOAD Jan 22 00:44:44.294000 audit: BPF prog-id=75 op=UNLOAD Jan 22 00:44:44.294000 audit: BPF prog-id=99 op=LOAD Jan 22 00:44:44.294000 audit: BPF prog-id=81 op=UNLOAD Jan 22 00:44:44.295000 audit: BPF prog-id=100 op=LOAD Jan 22 00:44:44.295000 audit: BPF prog-id=67 op=UNLOAD Jan 22 00:44:44.296000 audit: BPF prog-id=101 op=LOAD Jan 22 00:44:44.296000 audit: BPF prog-id=102 op=LOAD Jan 22 00:44:44.296000 audit: BPF prog-id=68 op=UNLOAD Jan 22 00:44:44.296000 audit: BPF prog-id=69 op=UNLOAD Jan 22 00:44:44.296000 audit: BPF prog-id=103 op=LOAD Jan 22 00:44:44.296000 audit: BPF prog-id=82 op=UNLOAD Jan 22 00:44:44.296000 audit: BPF prog-id=104 op=LOAD Jan 22 00:44:44.296000 audit: BPF prog-id=105 op=LOAD Jan 22 00:44:44.296000 audit: BPF prog-id=83 op=UNLOAD Jan 22 00:44:44.296000 audit: BPF prog-id=84 op=UNLOAD Jan 22 00:44:44.797939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:44.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:44.801835 (kubelet)[3536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 00:44:44.844175 kubelet[3536]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:44:44.844175 kubelet[3536]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 22 00:44:44.844175 kubelet[3536]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:44:44.844447 kubelet[3536]: I0122 00:44:44.844224 3536 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 00:44:45.247010 kubelet[3536]: I0122 00:44:45.246973 3536 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 22 00:44:45.247010 kubelet[3536]: I0122 00:44:45.246999 3536 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 00:44:45.247810 kubelet[3536]: I0122 00:44:45.247514 3536 server.go:954] "Client rotation is on, will bootstrap in background" Jan 22 00:44:45.277279 kubelet[3536]: E0122 00:44:45.277250 3536 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.28:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:44:45.278175 kubelet[3536]: I0122 00:44:45.278157 3536 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 00:44:45.283528 kubelet[3536]: I0122 00:44:45.283509 3536 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 00:44:45.285558 kubelet[3536]: I0122 00:44:45.285542 3536 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 22 00:44:45.287148 kubelet[3536]: I0122 00:44:45.286796 3536 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 00:44:45.287148 kubelet[3536]: I0122 00:44:45.286825 3536 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-d879fbfda5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 00:44:45.287148 kubelet[3536]: I0122 00:44:45.286948 3536 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 00:44:45.287148 kubelet[3536]: I0122 00:44:45.286956 3536 container_manager_linux.go:304] "Creating device plugin manager" Jan 22 00:44:45.287327 kubelet[3536]: I0122 00:44:45.287037 3536 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:44:45.290270 kubelet[3536]: I0122 00:44:45.290252 3536 kubelet.go:446] "Attempting to sync node with API server" Jan 22 00:44:45.290323 kubelet[3536]: I0122 00:44:45.290281 3536 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 00:44:45.290323 kubelet[3536]: I0122 00:44:45.290309 3536 kubelet.go:352] "Adding apiserver pod source" Jan 22 00:44:45.290323 kubelet[3536]: I0122 00:44:45.290321 3536 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 00:44:45.299476 kubelet[3536]: I0122 00:44:45.299012 3536 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 00:44:45.299476 kubelet[3536]: I0122 00:44:45.299378 3536 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 00:44:45.299969 kubelet[3536]: W0122 00:44:45.299958 3536 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 22 00:44:45.301626 kubelet[3536]: I0122 00:44:45.301614 3536 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 00:44:45.301702 kubelet[3536]: I0122 00:44:45.301698 3536 server.go:1287] "Started kubelet" Jan 22 00:44:45.301929 kubelet[3536]: W0122 00:44:45.301897 3536 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.28:6443: connect: connection refused Jan 22 00:44:45.302001 kubelet[3536]: E0122 00:44:45.301988 3536 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.28:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:44:45.302115 kubelet[3536]: W0122 00:44:45.302094 3536 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-n-d879fbfda5&limit=500&resourceVersion=0": dial tcp 10.200.8.28:6443: connect: connection refused Jan 22 00:44:45.302166 kubelet[3536]: E0122 00:44:45.302156 3536 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515.1.0-n-d879fbfda5&limit=500&resourceVersion=0\": dial tcp 10.200.8.28:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:44:45.308098 kubelet[3536]: I0122 00:44:45.307408 3536 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 00:44:45.308098 kubelet[3536]: I0122 00:44:45.307705 3536 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 00:44:45.310119 kubelet[3536]: I0122 00:44:45.309514 3536 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 00:44:45.310258 kubelet[3536]: E0122 00:44:45.308945 3536 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.28:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515.1.0-n-d879fbfda5.188ce6f311e8fab2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515.1.0-n-d879fbfda5,UID:ci-4515.1.0-n-d879fbfda5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515.1.0-n-d879fbfda5,},FirstTimestamp:2026-01-22 00:44:45.301684914 +0000 UTC m=+0.496145187,LastTimestamp:2026-01-22 00:44:45.301684914 +0000 UTC m=+0.496145187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515.1.0-n-d879fbfda5,}" Jan 22 00:44:45.311811 kubelet[3536]: I0122 00:44:45.311787 3536 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 00:44:45.312204 kubelet[3536]: I0122 00:44:45.312192 3536 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 00:44:45.313214 kubelet[3536]: E0122 00:44:45.313197 3536 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-d879fbfda5\" not 
found" Jan 22 00:44:45.318105 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 22 00:44:45.318171 kernel: audit: type=1325 audit(1769042685.311:347): table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.311000 audit[3548]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.311000 audit[3548]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeaf0d4330 a2=0 a3=0 items=0 ppid=3536 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.323902 kubelet[3536]: I0122 00:44:45.313647 3536 server.go:479] "Adding debug handlers to kubelet server" Jan 22 00:44:45.323902 kubelet[3536]: I0122 00:44:45.314684 3536 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 00:44:45.323902 kubelet[3536]: I0122 00:44:45.321868 3536 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 00:44:45.323902 kubelet[3536]: I0122 00:44:45.321914 3536 reconciler.go:26] "Reconciler: start to sync state" Jan 22 00:44:45.325150 kubelet[3536]: W0122 00:44:45.324904 3536 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.28:6443: connect: connection refused Jan 22 00:44:45.325150 kubelet[3536]: E0122 00:44:45.324950 3536 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.28:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:44:45.325150 kubelet[3536]: E0122 00:44:45.325044 3536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-d879fbfda5?timeout=10s\": dial tcp 10.200.8.28:6443: connect: connection refused" interval="200ms" Jan 22 00:44:45.331131 kernel: audit: type=1300 audit(1769042685.311:347): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffeaf0d4330 a2=0 a3=0 items=0 ppid=3536 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.331202 kernel: audit: type=1327 audit(1769042685.311:347): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 00:44:45.311000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 00:44:45.331259 kubelet[3536]: I0122 00:44:45.327856 3536 factory.go:221] Registration of the systemd container factory successfully Jan 22 00:44:45.331259 kubelet[3536]: I0122 00:44:45.327913 3536 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 
00:44:45.331259 kubelet[3536]: E0122 00:44:45.329528 3536 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 00:44:45.331259 kubelet[3536]: I0122 00:44:45.329622 3536 factory.go:221] Registration of the containerd container factory successfully Jan 22 00:44:45.325000 audit[3549]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.325000 audit[3549]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceadcef90 a2=0 a3=0 items=0 ppid=3536 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.341942 kernel: audit: type=1325 audit(1769042685.325:348): table=filter:46 family=2 entries=1 op=nft_register_chain pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.342000 kernel: audit: type=1300 audit(1769042685.325:348): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceadcef90 a2=0 a3=0 items=0 ppid=3536 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.325000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:44:45.345315 kernel: audit: type=1327 audit(1769042685.325:348): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:44:45.332000 audit[3551]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.357709 kernel: audit: type=1325 audit(1769042685.332:349): table=filter:47 family=2 entries=2 op=nft_register_chain pid=3551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.357781 kernel: audit: type=1300 audit(1769042685.332:349): arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe33ee9b70 a2=0 a3=0 items=0 ppid=3536 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.332000 audit[3551]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe33ee9b70 a2=0 a3=0 items=0 ppid=3536 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:44:45.367148 kernel: audit: type=1327 audit(1769042685.332:349): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:44:45.367206 kernel: audit: type=1325 audit(1769042685.337:350): table=filter:48 family=2 entries=2 op=nft_register_chain pid=3553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.337000 audit[3553]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3553 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 22 00:44:45.337000 audit[3553]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc8cddc580 a2=0 a3=0 items=0 ppid=3536 pid=3553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:44:45.367000 audit[3558]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.367000 audit[3558]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd62121c50 a2=0 a3=0 items=0 ppid=3536 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 22 00:44:45.370038 kubelet[3536]: I0122 00:44:45.369996 3536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 00:44:45.370000 audit[3560]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:45.370000 audit[3561]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.370000 audit[3561]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec2f3af80 a2=0 a3=0 items=0 ppid=3536 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 00:44:45.370000 audit[3560]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff5ab34bc0 a2=0 a3=0 items=0 ppid=3536 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 22 00:44:45.372883 kubelet[3536]: I0122 00:44:45.372723 3536 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 22 00:44:45.372883 kubelet[3536]: I0122 00:44:45.372875 3536 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 22 00:44:45.372953 kubelet[3536]: I0122 00:44:45.372892 3536 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 22 00:44:45.372953 kubelet[3536]: I0122 00:44:45.372907 3536 kubelet.go:2382] "Starting kubelet main sync loop" Jan 22 00:44:45.373076 kubelet[3536]: E0122 00:44:45.373038 3536 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 00:44:45.372000 audit[3563]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.372000 audit[3563]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5fae2ef0 a2=0 a3=0 items=0 ppid=3536 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.372000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:44:45.374608 kubelet[3536]: W0122 00:44:45.374588 3536 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.28:6443: connect: connection refused Jan 22 00:44:45.374681 kubelet[3536]: E0122 00:44:45.374619 3536 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.28:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:44:45.373000 audit[3564]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:45.373000 audit[3564]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7d681a00 a2=0 a3=0 items=0 ppid=3536 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 22 00:44:45.376217 kubelet[3536]: I0122 00:44:45.376203 3536 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 00:44:45.376302 kubelet[3536]: I0122 00:44:45.376295 3536 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 00:44:45.376350 kubelet[3536]: I0122 00:44:45.376345 3536 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:44:45.374000 audit[3566]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_chain pid=3566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:45.374000 audit[3566]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdbb47b9f0 a2=0 a3=0 items=0 ppid=3536 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.374000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 22 00:44:45.375000 audit[3565]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=3565 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:45.375000 audit[3565]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb138e500 a2=0 a3=0 items=0 ppid=3536 pid=3565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:44:45.376000 audit[3567]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:45.376000 audit[3567]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc50ee7310 a2=0 a3=0 items=0 ppid=3536 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 22 00:44:45.381303 kubelet[3536]: I0122 00:44:45.381289 3536 policy_none.go:49] "None policy: Start" Jan 22 00:44:45.381341 kubelet[3536]: I0122 00:44:45.381312 3536 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 00:44:45.381341 kubelet[3536]: I0122 00:44:45.381323 3536 state_mem.go:35] "Initializing new in-memory state store" Jan 22 00:44:45.390188 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 22 00:44:45.402719 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 22 00:44:45.411624 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 22 00:44:45.413031 kubelet[3536]: I0122 00:44:45.413010 3536 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 00:44:45.413176 kubelet[3536]: I0122 00:44:45.413162 3536 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:44:45.413208 kubelet[3536]: I0122 00:44:45.413175 3536 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:44:45.413615 kubelet[3536]: I0122 00:44:45.413598 3536 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:44:45.414660 kubelet[3536]: E0122 00:44:45.414639 3536 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 22 00:44:45.414757 kubelet[3536]: E0122 00:44:45.414680 3536 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515.1.0-n-d879fbfda5\" not found" Jan 22 00:44:45.482339 systemd[1]: Created slice kubepods-burstable-podc296d49c8344b7a520639eaafad6c9e5.slice - libcontainer container kubepods-burstable-podc296d49c8344b7a520639eaafad6c9e5.slice. 
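Every "connection refused" error in this stretch of the log has the same cause: the kubelet comes up before the kube-apiserver static pod it is about to create, so nothing is listening on https://10.200.8.28:6443 yet and the reflectors, lease controller and node registration all fail until the control-plane sandboxes below are running. A minimal Go probe, assuming the endpoint taken from the log, that exercises the same check:

```go
// Sketch only: a plain TCP dial against the API server endpoint reported in
// the kubelet errors above. While the kube-apiserver static pod is not yet
// up, this fails with "connect: connection refused", exactly as logged.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the log; on another node it would differ.
	const apiServer = "10.200.8.28:6443"

	conn, err := net.DialTimeout("tcp", apiServer, 3*time.Second)
	if err != nil {
		fmt.Println("API server not reachable yet:", err)
		return
	}
	defer conn.Close()
	fmt.Println("API server is accepting TCP connections")
}
```

A bare net.DialTimeout is enough here because the kubelet's failures are at the TCP layer ("dial tcp ... connect: connection refused"), not TLS or authorization.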
Jan 22 00:44:45.492349 kubelet[3536]: E0122 00:44:45.492325 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.495536 systemd[1]: Created slice kubepods-burstable-pod59e1e5631c24cd5645fbd891095e21c0.slice - libcontainer container kubepods-burstable-pod59e1e5631c24cd5645fbd891095e21c0.slice. Jan 22 00:44:45.500292 kubelet[3536]: E0122 00:44:45.497410 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.500088 systemd[1]: Created slice kubepods-burstable-podb73a4edeb681b5d323e17f7f3e881711.slice - libcontainer container kubepods-burstable-podb73a4edeb681b5d323e17f7f3e881711.slice. Jan 22 00:44:45.501588 kubelet[3536]: E0122 00:44:45.501572 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.514772 kubelet[3536]: I0122 00:44:45.514753 3536 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.515096 kubelet[3536]: E0122 00:44:45.515074 3536 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.28:6443/api/v1/nodes\": dial tcp 10.200.8.28:6443: connect: connection refused" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.525520 kubelet[3536]: E0122 00:44:45.525492 3536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-d879fbfda5?timeout=10s\": dial tcp 10.200.8.28:6443: connect: connection refused" interval="400ms" Jan 22 00:44:45.622910 kubelet[3536]: I0122 00:44:45.622834 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c296d49c8344b7a520639eaafad6c9e5-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" (UID: \"c296d49c8344b7a520639eaafad6c9e5\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.622910 kubelet[3536]: I0122 00:44:45.622874 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.622910 kubelet[3536]: I0122 00:44:45.622893 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.623157 kubelet[3536]: I0122 00:44:45.622919 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") 
" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.623157 kubelet[3536]: I0122 00:44:45.622951 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b73a4edeb681b5d323e17f7f3e881711-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-n-d879fbfda5\" (UID: \"b73a4edeb681b5d323e17f7f3e881711\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.623157 kubelet[3536]: I0122 00:44:45.622966 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c296d49c8344b7a520639eaafad6c9e5-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" (UID: \"c296d49c8344b7a520639eaafad6c9e5\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.623157 kubelet[3536]: I0122 00:44:45.622984 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c296d49c8344b7a520639eaafad6c9e5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" (UID: \"c296d49c8344b7a520639eaafad6c9e5\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.623157 kubelet[3536]: I0122 00:44:45.622998 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.623241 kubelet[3536]: I0122 00:44:45.623013 3536 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.717804 kubelet[3536]: I0122 00:44:45.717613 3536 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.718025 kubelet[3536]: E0122 00:44:45.717961 3536 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.28:6443/api/v1/nodes\": dial tcp 10.200.8.28:6443: connect: connection refused" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:45.794753 containerd[2460]: time="2026-01-22T00:44:45.794374237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-d879fbfda5,Uid:c296d49c8344b7a520639eaafad6c9e5,Namespace:kube-system,Attempt:0,}" Jan 22 00:44:45.798793 containerd[2460]: time="2026-01-22T00:44:45.798765258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-d879fbfda5,Uid:59e1e5631c24cd5645fbd891095e21c0,Namespace:kube-system,Attempt:0,}" Jan 22 00:44:45.802397 containerd[2460]: time="2026-01-22T00:44:45.802345244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-d879fbfda5,Uid:b73a4edeb681b5d323e17f7f3e881711,Namespace:kube-system,Attempt:0,}" Jan 22 00:44:45.875437 containerd[2460]: time="2026-01-22T00:44:45.872429374Z" level=info msg="connecting to shim 
8877a52bfb8d9330951011462ee3d7da2a97b947086903b2e1edc2ea61f421e1" address="unix:///run/containerd/s/35e94813a8d3d561ab034203f6401171e4c990692466d83d52964c554b63aeac" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:44:45.886771 containerd[2460]: time="2026-01-22T00:44:45.886356936Z" level=info msg="connecting to shim fc5cec5c77f90696c17feb67ac18e42cf129aa8ec42f65dd27693a32692fee75" address="unix:///run/containerd/s/6c036d9e167be3161cc2b24ea927e4acb76c467594c3953f90ad121dcc094087" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:44:45.896844 containerd[2460]: time="2026-01-22T00:44:45.896811658Z" level=info msg="connecting to shim 8da492c0a07f702c09e4af848ded421666fb80b20464e79be214b72f571dbfe0" address="unix:///run/containerd/s/c5f3054c8a27df19b54303dc451b579c612e3ba011f914549a692b1ed92c8906" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:44:45.921124 systemd[1]: Started cri-containerd-8877a52bfb8d9330951011462ee3d7da2a97b947086903b2e1edc2ea61f421e1.scope - libcontainer container 8877a52bfb8d9330951011462ee3d7da2a97b947086903b2e1edc2ea61f421e1. Jan 22 00:44:45.924787 systemd[1]: Started cri-containerd-fc5cec5c77f90696c17feb67ac18e42cf129aa8ec42f65dd27693a32692fee75.scope - libcontainer container fc5cec5c77f90696c17feb67ac18e42cf129aa8ec42f65dd27693a32692fee75. Jan 22 00:44:45.926653 kubelet[3536]: E0122 00:44:45.926614 3536 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515.1.0-n-d879fbfda5?timeout=10s\": dial tcp 10.200.8.28:6443: connect: connection refused" interval="800ms" Jan 22 00:44:45.940942 systemd[1]: Started cri-containerd-8da492c0a07f702c09e4af848ded421666fb80b20464e79be214b72f571dbfe0.scope - libcontainer container 8da492c0a07f702c09e4af848ded421666fb80b20464e79be214b72f571dbfe0. 
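Each "connecting to shim" line above names a per-sandbox ttrpc socket under /run/containerd/s/ through which containerd talks to the shim it just started for that pod sandbox. A minimal Go sketch, with the socket address copied from the log (it is specific to this boot, so treat it as a placeholder), that checks whether such a shim socket is accepting connections:

```go
// Sketch only: strip the unix:// prefix from a logged shim address and try
// to connect to the socket. The path below is copied from the log and will
// not exist on another machine or after this sandbox is gone.
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	addr := "unix:///run/containerd/s/35e94813a8d3d561ab034203f6401171e4c990692466d83d52964c554b63aeac"
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("shim is listening on", path)
}
```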
Jan 22 00:44:45.942000 audit: BPF prog-id=106 op=LOAD Jan 22 00:44:45.943000 audit: BPF prog-id=107 op=LOAD Jan 22 00:44:45.943000 audit[3635]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3589 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663356365633563373766393036393663313766656236376163313865 Jan 22 00:44:45.943000 audit: BPF prog-id=107 op=UNLOAD Jan 22 00:44:45.943000 audit[3635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3589 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663356365633563373766393036393663313766656236376163313865 Jan 22 00:44:45.944000 audit: BPF prog-id=108 op=LOAD Jan 22 00:44:45.945000 audit: BPF prog-id=109 op=LOAD Jan 22 00:44:45.945000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3575 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838373761353262666238643933333039353130313134363265653364 Jan 22 00:44:45.945000 audit: BPF prog-id=109 op=UNLOAD Jan 22 00:44:45.945000 audit[3606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838373761353262666238643933333039353130313134363265653364 Jan 22 00:44:45.945000 audit: BPF prog-id=110 op=LOAD Jan 22 00:44:45.945000 audit[3635]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3589 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663356365633563373766393036393663313766656236376163313865 Jan 22 00:44:45.945000 audit: BPF prog-id=111 
op=LOAD Jan 22 00:44:45.945000 audit[3635]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3589 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663356365633563373766393036393663313766656236376163313865 Jan 22 00:44:45.945000 audit: BPF prog-id=111 op=UNLOAD Jan 22 00:44:45.945000 audit[3635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3589 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663356365633563373766393036393663313766656236376163313865 Jan 22 00:44:45.945000 audit: BPF prog-id=110 op=UNLOAD Jan 22 00:44:45.945000 audit[3635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3589 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663356365633563373766393036393663313766656236376163313865 Jan 22 00:44:45.945000 audit: BPF prog-id=112 op=LOAD Jan 22 00:44:45.945000 audit[3635]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3589 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663356365633563373766393036393663313766656236376163313865 Jan 22 00:44:45.946000 audit: BPF prog-id=113 op=LOAD Jan 22 00:44:45.946000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3575 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838373761353262666238643933333039353130313134363265653364 Jan 22 00:44:45.946000 audit: BPF prog-id=114 op=LOAD Jan 22 00:44:45.946000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3575 pid=3606 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838373761353262666238643933333039353130313134363265653364 Jan 22 00:44:45.946000 audit: BPF prog-id=114 op=UNLOAD Jan 22 00:44:45.946000 audit[3606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838373761353262666238643933333039353130313134363265653364 Jan 22 00:44:45.946000 audit: BPF prog-id=113 op=UNLOAD Jan 22 00:44:45.946000 audit[3606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838373761353262666238643933333039353130313134363265653364 Jan 22 00:44:45.946000 audit: BPF prog-id=115 op=LOAD Jan 22 00:44:45.946000 audit[3606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3575 pid=3606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.946000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838373761353262666238643933333039353130313134363265653364 Jan 22 00:44:45.953000 audit: BPF prog-id=116 op=LOAD Jan 22 00:44:45.954000 audit: BPF prog-id=117 op=LOAD Jan 22 00:44:45.954000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3608 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864613439326330613037663730326330396534616638343864656434 Jan 22 00:44:45.954000 audit: BPF prog-id=117 op=UNLOAD Jan 22 00:44:45.954000 audit[3644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3608 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864613439326330613037663730326330396534616638343864656434 Jan 22 00:44:45.954000 audit: BPF prog-id=118 op=LOAD Jan 22 00:44:45.954000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3608 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864613439326330613037663730326330396534616638343864656434 Jan 22 00:44:45.954000 audit: BPF prog-id=119 op=LOAD Jan 22 00:44:45.954000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3608 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864613439326330613037663730326330396534616638343864656434 Jan 22 00:44:45.954000 audit: BPF prog-id=119 op=UNLOAD Jan 22 00:44:45.954000 audit[3644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3608 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864613439326330613037663730326330396534616638343864656434 Jan 22 00:44:45.954000 audit: BPF prog-id=118 op=UNLOAD Jan 22 00:44:45.954000 audit[3644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3608 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864613439326330613037663730326330396534616638343864656434 Jan 22 00:44:45.954000 audit: BPF prog-id=120 op=LOAD Jan 22 00:44:45.954000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3608 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:45.954000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864613439326330613037663730326330396534616638343864656434 Jan 22 00:44:46.001684 containerd[2460]: time="2026-01-22T00:44:46.001644201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515.1.0-n-d879fbfda5,Uid:59e1e5631c24cd5645fbd891095e21c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc5cec5c77f90696c17feb67ac18e42cf129aa8ec42f65dd27693a32692fee75\"" Jan 22 00:44:46.007583 containerd[2460]: time="2026-01-22T00:44:46.007210060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515.1.0-n-d879fbfda5,Uid:c296d49c8344b7a520639eaafad6c9e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8877a52bfb8d9330951011462ee3d7da2a97b947086903b2e1edc2ea61f421e1\"" Jan 22 00:44:46.008217 containerd[2460]: time="2026-01-22T00:44:46.008047633Z" level=info msg="CreateContainer within sandbox \"fc5cec5c77f90696c17feb67ac18e42cf129aa8ec42f65dd27693a32692fee75\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 22 00:44:46.010967 containerd[2460]: time="2026-01-22T00:44:46.010938456Z" level=info msg="CreateContainer within sandbox \"8877a52bfb8d9330951011462ee3d7da2a97b947086903b2e1edc2ea61f421e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 22 00:44:46.020138 containerd[2460]: time="2026-01-22T00:44:46.020097436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515.1.0-n-d879fbfda5,Uid:b73a4edeb681b5d323e17f7f3e881711,Namespace:kube-system,Attempt:0,} returns sandbox id \"8da492c0a07f702c09e4af848ded421666fb80b20464e79be214b72f571dbfe0\"" Jan 22 00:44:46.023715 containerd[2460]: time="2026-01-22T00:44:46.023609156Z" level=info msg="CreateContainer within sandbox \"8da492c0a07f702c09e4af848ded421666fb80b20464e79be214b72f571dbfe0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 22 00:44:46.038950 containerd[2460]: time="2026-01-22T00:44:46.038926938Z" level=info msg="Container ba74f1c0ef8f78ebd6e40b9d5e4e4d4cd498f7593ff85e45685a429363b5869a: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:44:46.046121 containerd[2460]: time="2026-01-22T00:44:46.046050682Z" level=info msg="Container 961cc929a8cbdae4b16397f53091e1aa99ead99f54a94c74eb9b59b1dcb9e921: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:44:46.062503 containerd[2460]: time="2026-01-22T00:44:46.062477594Z" level=info msg="CreateContainer within sandbox \"fc5cec5c77f90696c17feb67ac18e42cf129aa8ec42f65dd27693a32692fee75\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ba74f1c0ef8f78ebd6e40b9d5e4e4d4cd498f7593ff85e45685a429363b5869a\"" Jan 22 00:44:46.062937 containerd[2460]: time="2026-01-22T00:44:46.062914306Z" level=info msg="StartContainer for \"ba74f1c0ef8f78ebd6e40b9d5e4e4d4cd498f7593ff85e45685a429363b5869a\"" Jan 22 00:44:46.063788 containerd[2460]: time="2026-01-22T00:44:46.063763715Z" level=info msg="connecting to shim ba74f1c0ef8f78ebd6e40b9d5e4e4d4cd498f7593ff85e45685a429363b5869a" address="unix:///run/containerd/s/6c036d9e167be3161cc2b24ea927e4acb76c467594c3953f90ad121dcc094087" protocol=ttrpc version=3 Jan 22 00:44:46.077144 containerd[2460]: time="2026-01-22T00:44:46.077120199Z" level=info msg="Container 0c1ac14e1139ae8cf7347182706cdba658617a093853e35461f163cd09092523: CDI devices from CRI 
Config.CDIDevices: []" Jan 22 00:44:46.081950 systemd[1]: Started cri-containerd-ba74f1c0ef8f78ebd6e40b9d5e4e4d4cd498f7593ff85e45685a429363b5869a.scope - libcontainer container ba74f1c0ef8f78ebd6e40b9d5e4e4d4cd498f7593ff85e45685a429363b5869a. Jan 22 00:44:46.090588 containerd[2460]: time="2026-01-22T00:44:46.090551813Z" level=info msg="CreateContainer within sandbox \"8877a52bfb8d9330951011462ee3d7da2a97b947086903b2e1edc2ea61f421e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"961cc929a8cbdae4b16397f53091e1aa99ead99f54a94c74eb9b59b1dcb9e921\"" Jan 22 00:44:46.091111 containerd[2460]: time="2026-01-22T00:44:46.091076058Z" level=info msg="StartContainer for \"961cc929a8cbdae4b16397f53091e1aa99ead99f54a94c74eb9b59b1dcb9e921\"" Jan 22 00:44:46.092000 audit: BPF prog-id=121 op=LOAD Jan 22 00:44:46.092000 audit: BPF prog-id=122 op=LOAD Jan 22 00:44:46.092000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3589 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261373466316330656638663738656264366534306239643565346534 Jan 22 00:44:46.092000 audit: BPF prog-id=122 op=UNLOAD Jan 22 00:44:46.092000 audit[3710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3589 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261373466316330656638663738656264366534306239643565346534 Jan 22 00:44:46.092000 audit: BPF prog-id=123 op=LOAD Jan 22 00:44:46.092000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3589 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261373466316330656638663738656264366534306239643565346534 Jan 22 00:44:46.092000 audit: BPF prog-id=124 op=LOAD Jan 22 00:44:46.092000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3589 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261373466316330656638663738656264366534306239643565346534 Jan 22 
00:44:46.092000 audit: BPF prog-id=124 op=UNLOAD Jan 22 00:44:46.092000 audit[3710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3589 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261373466316330656638663738656264366534306239643565346534 Jan 22 00:44:46.092000 audit: BPF prog-id=123 op=UNLOAD Jan 22 00:44:46.092000 audit[3710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3589 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261373466316330656638663738656264366534306239643565346534 Jan 22 00:44:46.092000 audit: BPF prog-id=125 op=LOAD Jan 22 00:44:46.092000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3589 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261373466316330656638663738656264366534306239643565346534 Jan 22 00:44:46.096401 containerd[2460]: time="2026-01-22T00:44:46.096369556Z" level=info msg="CreateContainer within sandbox \"8da492c0a07f702c09e4af848ded421666fb80b20464e79be214b72f571dbfe0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c1ac14e1139ae8cf7347182706cdba658617a093853e35461f163cd09092523\"" Jan 22 00:44:46.097226 containerd[2460]: time="2026-01-22T00:44:46.097027709Z" level=info msg="connecting to shim 961cc929a8cbdae4b16397f53091e1aa99ead99f54a94c74eb9b59b1dcb9e921" address="unix:///run/containerd/s/35e94813a8d3d561ab034203f6401171e4c990692466d83d52964c554b63aeac" protocol=ttrpc version=3 Jan 22 00:44:46.097896 containerd[2460]: time="2026-01-22T00:44:46.097855075Z" level=info msg="StartContainer for \"0c1ac14e1139ae8cf7347182706cdba658617a093853e35461f163cd09092523\"" Jan 22 00:44:46.100956 containerd[2460]: time="2026-01-22T00:44:46.100901070Z" level=info msg="connecting to shim 0c1ac14e1139ae8cf7347182706cdba658617a093853e35461f163cd09092523" address="unix:///run/containerd/s/c5f3054c8a27df19b54303dc451b579c612e3ba011f914549a692b1ed92c8906" protocol=ttrpc version=3 Jan 22 00:44:46.121026 kubelet[3536]: I0122 00:44:46.121007 3536 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:46.121927 kubelet[3536]: E0122 00:44:46.121820 3536 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.28:6443/api/v1/nodes\": dial tcp 10.200.8.28:6443: connect: connection 
refused" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:46.122052 systemd[1]: Started cri-containerd-961cc929a8cbdae4b16397f53091e1aa99ead99f54a94c74eb9b59b1dcb9e921.scope - libcontainer container 961cc929a8cbdae4b16397f53091e1aa99ead99f54a94c74eb9b59b1dcb9e921. Jan 22 00:44:46.131983 systemd[1]: Started cri-containerd-0c1ac14e1139ae8cf7347182706cdba658617a093853e35461f163cd09092523.scope - libcontainer container 0c1ac14e1139ae8cf7347182706cdba658617a093853e35461f163cd09092523. Jan 22 00:44:46.153602 containerd[2460]: time="2026-01-22T00:44:46.153477196Z" level=info msg="StartContainer for \"ba74f1c0ef8f78ebd6e40b9d5e4e4d4cd498f7593ff85e45685a429363b5869a\" returns successfully" Jan 22 00:44:46.154000 audit: BPF prog-id=126 op=LOAD Jan 22 00:44:46.154000 audit: BPF prog-id=127 op=LOAD Jan 22 00:44:46.154000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316163313465313133396165386366373334373138323730366364 Jan 22 00:44:46.154000 audit: BPF prog-id=127 op=UNLOAD Jan 22 00:44:46.154000 audit[3732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316163313465313133396165386366373334373138323730366364 Jan 22 00:44:46.154000 audit: BPF prog-id=128 op=LOAD Jan 22 00:44:46.154000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316163313465313133396165386366373334373138323730366364 Jan 22 00:44:46.154000 audit: BPF prog-id=129 op=LOAD Jan 22 00:44:46.154000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316163313465313133396165386366373334373138323730366364 Jan 22 00:44:46.154000 audit: BPF prog-id=129 op=UNLOAD Jan 22 00:44:46.154000 
audit[3732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316163313465313133396165386366373334373138323730366364 Jan 22 00:44:46.154000 audit: BPF prog-id=128 op=UNLOAD Jan 22 00:44:46.154000 audit[3732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316163313465313133396165386366373334373138323730366364 Jan 22 00:44:46.155000 audit: BPF prog-id=130 op=LOAD Jan 22 00:44:46.155000 audit[3732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3608 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316163313465313133396165386366373334373138323730366364 Jan 22 00:44:46.162000 audit: BPF prog-id=131 op=LOAD Jan 22 00:44:46.163000 audit: BPF prog-id=132 op=LOAD Jan 22 00:44:46.163000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3575 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316363393239613863626461653462313633393766353330393165 Jan 22 00:44:46.163000 audit: BPF prog-id=132 op=UNLOAD Jan 22 00:44:46.163000 audit[3731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316363393239613863626461653462313633393766353330393165 Jan 22 00:44:46.163000 audit: BPF prog-id=133 op=LOAD Jan 22 00:44:46.163000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 
ppid=3575 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316363393239613863626461653462313633393766353330393165 Jan 22 00:44:46.163000 audit: BPF prog-id=134 op=LOAD Jan 22 00:44:46.163000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3575 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316363393239613863626461653462313633393766353330393165 Jan 22 00:44:46.163000 audit: BPF prog-id=134 op=UNLOAD Jan 22 00:44:46.163000 audit[3731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316363393239613863626461653462313633393766353330393165 Jan 22 00:44:46.163000 audit: BPF prog-id=133 op=UNLOAD Jan 22 00:44:46.163000 audit[3731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3575 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316363393239613863626461653462313633393766353330393165 Jan 22 00:44:46.163000 audit: BPF prog-id=135 op=LOAD Jan 22 00:44:46.163000 audit[3731]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3575 pid=3731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:46.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936316363393239613863626461653462313633393766353330393165 Jan 22 00:44:46.264682 containerd[2460]: time="2026-01-22T00:44:46.264647645Z" level=info msg="StartContainer for \"961cc929a8cbdae4b16397f53091e1aa99ead99f54a94c74eb9b59b1dcb9e921\" returns successfully" Jan 22 00:44:46.265074 containerd[2460]: time="2026-01-22T00:44:46.265051158Z" level=info msg="StartContainer for 
\"0c1ac14e1139ae8cf7347182706cdba658617a093853e35461f163cd09092523\" returns successfully" Jan 22 00:44:46.304322 kubelet[3536]: W0122 00:44:46.304215 3536 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.28:6443: connect: connection refused Jan 22 00:44:46.304322 kubelet[3536]: E0122 00:44:46.304293 3536 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.28:6443: connect: connection refused" logger="UnhandledError" Jan 22 00:44:46.382185 kubelet[3536]: E0122 00:44:46.382045 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:46.386327 kubelet[3536]: E0122 00:44:46.386103 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:46.390234 kubelet[3536]: E0122 00:44:46.390068 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:46.924827 kubelet[3536]: I0122 00:44:46.924582 3536 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:47.393564 kubelet[3536]: E0122 00:44:47.393174 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:47.393564 kubelet[3536]: E0122 00:44:47.393460 3536 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.220982 kubelet[3536]: E0122 00:44:48.220903 3536 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515.1.0-n-d879fbfda5\" not found" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.295881 kubelet[3536]: I0122 00:44:48.295104 3536 apiserver.go:52] "Watching apiserver" Jan 22 00:44:48.322819 kubelet[3536]: I0122 00:44:48.322794 3536 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 00:44:48.327398 kubelet[3536]: I0122 00:44:48.326810 3536 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.327398 kubelet[3536]: E0122 00:44:48.326835 3536 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515.1.0-n-d879fbfda5\": node \"ci-4515.1.0-n-d879fbfda5\" not found" Jan 22 00:44:48.391979 kubelet[3536]: I0122 00:44:48.391908 3536 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.422087 kubelet[3536]: I0122 00:44:48.422061 3536 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.449448 kubelet[3536]: E0122 00:44:48.449403 3536 kubelet.go:3196] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.453023 kubelet[3536]: E0122 00:44:48.452567 3536 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.453186 kubelet[3536]: I0122 00:44:48.453138 3536 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.458124 kubelet[3536]: E0122 00:44:48.458065 3536 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.458124 kubelet[3536]: I0122 00:44:48.458086 3536 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:48.459626 kubelet[3536]: E0122 00:44:48.459602 3536 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-d879fbfda5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:50.277950 systemd[1]: Reload requested from client PID 3807 ('systemctl') (unit session-9.scope)... Jan 22 00:44:50.277964 systemd[1]: Reloading... Jan 22 00:44:50.368774 zram_generator::config[3860]: No configuration found. Jan 22 00:44:50.564977 systemd[1]: Reloading finished in 286 ms. Jan 22 00:44:50.595963 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 22 00:44:50.615557 systemd[1]: kubelet.service: Deactivated successfully. Jan 22 00:44:50.615828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:50.623897 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 22 00:44:50.623952 kernel: audit: type=1131 audit(1769042690.615:407): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:50.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:44:50.615889 systemd[1]: kubelet.service: Consumed 798ms CPU time, 131.9M memory peak. Jan 22 00:44:50.618720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 22 00:44:50.626842 kernel: audit: type=1334 audit(1769042690.619:408): prog-id=136 op=LOAD Jan 22 00:44:50.619000 audit: BPF prog-id=136 op=LOAD Jan 22 00:44:50.621000 audit: BPF prog-id=86 op=UNLOAD Jan 22 00:44:50.628090 kernel: audit: type=1334 audit(1769042690.621:409): prog-id=86 op=UNLOAD Jan 22 00:44:50.622000 audit: BPF prog-id=137 op=LOAD Jan 22 00:44:50.629444 kernel: audit: type=1334 audit(1769042690.622:410): prog-id=137 op=LOAD Jan 22 00:44:50.622000 audit: BPF prog-id=103 op=UNLOAD Jan 22 00:44:50.631364 kernel: audit: type=1334 audit(1769042690.622:411): prog-id=103 op=UNLOAD Jan 22 00:44:50.633891 kernel: audit: type=1334 audit(1769042690.622:412): prog-id=138 op=LOAD Jan 22 00:44:50.622000 audit: BPF prog-id=138 op=LOAD Jan 22 00:44:50.622000 audit: BPF prog-id=139 op=LOAD Jan 22 00:44:50.622000 audit: BPF prog-id=104 op=UNLOAD Jan 22 00:44:50.638640 kernel: audit: type=1334 audit(1769042690.622:413): prog-id=139 op=LOAD Jan 22 00:44:50.638694 kernel: audit: type=1334 audit(1769042690.622:414): prog-id=104 op=UNLOAD Jan 22 00:44:50.622000 audit: BPF prog-id=105 op=UNLOAD Jan 22 00:44:50.640952 kernel: audit: type=1334 audit(1769042690.622:415): prog-id=105 op=UNLOAD Jan 22 00:44:50.642277 kernel: audit: type=1334 audit(1769042690.623:416): prog-id=140 op=LOAD Jan 22 00:44:50.623000 audit: BPF prog-id=140 op=LOAD Jan 22 00:44:50.623000 audit: BPF prog-id=99 op=UNLOAD Jan 22 00:44:50.624000 audit: BPF prog-id=141 op=LOAD Jan 22 00:44:50.624000 audit: BPF prog-id=142 op=LOAD Jan 22 00:44:50.624000 audit: BPF prog-id=87 op=UNLOAD Jan 22 00:44:50.624000 audit: BPF prog-id=88 op=UNLOAD Jan 22 00:44:50.625000 audit: BPF prog-id=143 op=LOAD Jan 22 00:44:50.625000 audit: BPF prog-id=89 op=UNLOAD Jan 22 00:44:50.625000 audit: BPF prog-id=144 op=LOAD Jan 22 00:44:50.625000 audit: BPF prog-id=145 op=LOAD Jan 22 00:44:50.625000 audit: BPF prog-id=90 op=UNLOAD Jan 22 00:44:50.625000 audit: BPF prog-id=91 op=UNLOAD Jan 22 00:44:50.632000 audit: BPF prog-id=146 op=LOAD Jan 22 00:44:50.632000 audit: BPF prog-id=92 op=UNLOAD Jan 22 00:44:50.633000 audit: BPF prog-id=147 op=LOAD Jan 22 00:44:50.633000 audit: BPF prog-id=148 op=LOAD Jan 22 00:44:50.633000 audit: BPF prog-id=93 op=UNLOAD Jan 22 00:44:50.633000 audit: BPF prog-id=94 op=UNLOAD Jan 22 00:44:50.634000 audit: BPF prog-id=149 op=LOAD Jan 22 00:44:50.635000 audit: BPF prog-id=95 op=UNLOAD Jan 22 00:44:50.635000 audit: BPF prog-id=150 op=LOAD Jan 22 00:44:50.635000 audit: BPF prog-id=96 op=UNLOAD Jan 22 00:44:50.635000 audit: BPF prog-id=151 op=LOAD Jan 22 00:44:50.635000 audit: BPF prog-id=152 op=LOAD Jan 22 00:44:50.635000 audit: BPF prog-id=97 op=UNLOAD Jan 22 00:44:50.635000 audit: BPF prog-id=98 op=UNLOAD Jan 22 00:44:50.636000 audit: BPF prog-id=153 op=LOAD Jan 22 00:44:50.636000 audit: BPF prog-id=100 op=UNLOAD Jan 22 00:44:50.636000 audit: BPF prog-id=154 op=LOAD Jan 22 00:44:50.636000 audit: BPF prog-id=155 op=LOAD Jan 22 00:44:50.636000 audit: BPF prog-id=101 op=UNLOAD Jan 22 00:44:50.636000 audit: BPF prog-id=102 op=UNLOAD Jan 22 00:44:51.083860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 22 00:44:51.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:44:51.090977 (kubelet)[3924]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 22 00:44:51.127825 kubelet[3924]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:44:51.127825 kubelet[3924]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 22 00:44:51.127825 kubelet[3924]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 00:44:51.128135 kubelet[3924]: I0122 00:44:51.127883 3924 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 00:44:51.134087 kubelet[3924]: I0122 00:44:51.134064 3924 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 22 00:44:51.134087 kubelet[3924]: I0122 00:44:51.134081 3924 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 22 00:44:51.134695 kubelet[3924]: I0122 00:44:51.134651 3924 server.go:954] "Client rotation is on, will bootstrap in background" Jan 22 00:44:51.137771 kubelet[3924]: I0122 00:44:51.137001 3924 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 22 00:44:51.140316 kubelet[3924]: I0122 00:44:51.140290 3924 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 22 00:44:51.145037 kubelet[3924]: I0122 00:44:51.145018 3924 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 22 00:44:51.148040 kubelet[3924]: I0122 00:44:51.148020 3924 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 22 00:44:51.148212 kubelet[3924]: I0122 00:44:51.148180 3924 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 22 00:44:51.148352 kubelet[3924]: I0122 00:44:51.148213 3924 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515.1.0-n-d879fbfda5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 00:44:51.148442 kubelet[3924]: I0122 00:44:51.148360 3924 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 00:44:51.148442 kubelet[3924]: I0122 00:44:51.148370 3924 container_manager_linux.go:304] "Creating device plugin manager" Jan 22 00:44:51.148442 kubelet[3924]: I0122 00:44:51.148418 3924 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:44:51.148544 kubelet[3924]: I0122 00:44:51.148534 3924 kubelet.go:446] "Attempting to sync node with API server" Jan 22 00:44:51.148568 kubelet[3924]: I0122 00:44:51.148554 3924 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 00:44:51.148596 kubelet[3924]: I0122 00:44:51.148576 3924 kubelet.go:352] "Adding apiserver pod source" Jan 22 00:44:51.148596 kubelet[3924]: I0122 00:44:51.148585 3924 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 22 00:44:51.151792 kubelet[3924]: I0122 00:44:51.151774 3924 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 22 00:44:51.152195 kubelet[3924]: I0122 00:44:51.152173 3924 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 22 00:44:51.152601 kubelet[3924]: I0122 00:44:51.152588 3924 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 22 00:44:51.152644 kubelet[3924]: I0122 00:44:51.152615 3924 server.go:1287] "Started kubelet" Jan 22 00:44:51.156707 kubelet[3924]: I0122 00:44:51.156680 3924 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 00:44:51.157577 kubelet[3924]: I0122 00:44:51.157553 3924 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 00:44:51.158935 kubelet[3924]: I0122 00:44:51.158919 3924 server.go:479] "Adding debug handlers to kubelet server" Jan 22 00:44:51.161272 kubelet[3924]: I0122 00:44:51.161253 3924 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 22 00:44:51.175677 kubelet[3924]: I0122 00:44:51.163810 3924 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 22 00:44:51.176996 kubelet[3924]: I0122 00:44:51.163841 3924 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 22 00:44:51.176996 kubelet[3924]: I0122 00:44:51.163896 3924 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 00:44:51.177172 kubelet[3924]: I0122 00:44:51.177156 3924 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 00:44:51.177201 kubelet[3924]: E0122 00:44:51.163971 3924 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515.1.0-n-d879fbfda5\" not found" Jan 22 00:44:51.177295 kubelet[3924]: I0122 00:44:51.177286 3924 reconciler.go:26] "Reconciler: start to sync state" Jan 22 00:44:51.181728 kubelet[3924]: I0122 00:44:51.181707 3924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 00:44:51.183680 kubelet[3924]: I0122 00:44:51.183635 3924 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 22 00:44:51.183680 kubelet[3924]: I0122 00:44:51.183682 3924 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 22 00:44:51.183804 kubelet[3924]: I0122 00:44:51.183702 3924 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 22 00:44:51.183804 kubelet[3924]: I0122 00:44:51.183709 3924 kubelet.go:2382] "Starting kubelet main sync loop" Jan 22 00:44:51.183804 kubelet[3924]: E0122 00:44:51.183784 3924 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 00:44:51.187233 kubelet[3924]: I0122 00:44:51.187200 3924 factory.go:221] Registration of the systemd container factory successfully Jan 22 00:44:51.187383 kubelet[3924]: I0122 00:44:51.187368 3924 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 22 00:44:51.189974 kubelet[3924]: E0122 00:44:51.189959 3924 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 22 00:44:51.190138 kubelet[3924]: I0122 00:44:51.190130 3924 factory.go:221] Registration of the containerd container factory successfully Jan 22 00:44:51.252482 kubelet[3924]: I0122 00:44:51.252463 3924 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 22 00:44:51.252482 kubelet[3924]: I0122 00:44:51.252478 3924 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 22 00:44:51.252590 kubelet[3924]: I0122 00:44:51.252496 3924 state_mem.go:36] "Initialized new in-memory state store" Jan 22 00:44:51.252769 kubelet[3924]: I0122 00:44:51.252627 3924 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 22 00:44:51.252769 kubelet[3924]: I0122 00:44:51.252638 3924 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 22 00:44:51.252769 kubelet[3924]: I0122 00:44:51.252658 3924 policy_none.go:49] "None policy: Start" Jan 22 00:44:51.252769 kubelet[3924]: I0122 00:44:51.252668 3924 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 22 00:44:51.252769 kubelet[3924]: I0122 00:44:51.252676 3924 state_mem.go:35] "Initializing new in-memory state store" Jan 22 00:44:51.252893 kubelet[3924]: I0122 00:44:51.252877 3924 state_mem.go:75] "Updated machine memory state" Jan 22 00:44:51.256020 kubelet[3924]: I0122 00:44:51.256001 3924 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 00:44:51.256129 kubelet[3924]: I0122 00:44:51.256118 3924 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 00:44:51.256164 kubelet[3924]: I0122 00:44:51.256131 3924 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 00:44:51.257254 kubelet[3924]: I0122 00:44:51.257237 3924 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 00:44:51.258053 kubelet[3924]: E0122 00:44:51.258021 3924 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 22 00:44:51.286352 kubelet[3924]: I0122 00:44:51.284984 3924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.286352 kubelet[3924]: I0122 00:44:51.285070 3924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.286652 kubelet[3924]: I0122 00:44:51.286640 3924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.293071 kubelet[3924]: W0122 00:44:51.293043 3924 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 22 00:44:51.297639 kubelet[3924]: W0122 00:44:51.296384 3924 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 22 00:44:51.297639 kubelet[3924]: W0122 00:44:51.296527 3924 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 22 00:44:51.360926 kubelet[3924]: I0122 00:44:51.360484 3924 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.372524 kubelet[3924]: I0122 00:44:51.372449 3924 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.372524 kubelet[3924]: I0122 00:44:51.372523 3924 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379428 kubelet[3924]: I0122 00:44:51.377934 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-k8s-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379428 kubelet[3924]: I0122 00:44:51.377967 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-kubeconfig\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379428 kubelet[3924]: I0122 00:44:51.378027 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379428 kubelet[3924]: I0122 00:44:51.378047 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c296d49c8344b7a520639eaafad6c9e5-ca-certs\") pod \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" (UID: \"c296d49c8344b7a520639eaafad6c9e5\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379428 kubelet[3924]: I0122 00:44:51.378175 3924 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c296d49c8344b7a520639eaafad6c9e5-k8s-certs\") pod \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" (UID: \"c296d49c8344b7a520639eaafad6c9e5\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379567 kubelet[3924]: I0122 00:44:51.378196 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c296d49c8344b7a520639eaafad6c9e5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" (UID: \"c296d49c8344b7a520639eaafad6c9e5\") " pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379567 kubelet[3924]: I0122 00:44:51.378238 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-ca-certs\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379567 kubelet[3924]: I0122 00:44:51.378258 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59e1e5631c24cd5645fbd891095e21c0-flexvolume-dir\") pod \"kube-controller-manager-ci-4515.1.0-n-d879fbfda5\" (UID: \"59e1e5631c24cd5645fbd891095e21c0\") " pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:51.379567 kubelet[3924]: I0122 00:44:51.378275 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b73a4edeb681b5d323e17f7f3e881711-kubeconfig\") pod \"kube-scheduler-ci-4515.1.0-n-d879fbfda5\" (UID: \"b73a4edeb681b5d323e17f7f3e881711\") " pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:52.149548 kubelet[3924]: I0122 00:44:52.149514 3924 apiserver.go:52] "Watching apiserver" Jan 22 00:44:52.177919 kubelet[3924]: I0122 00:44:52.177888 3924 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 22 00:44:52.226303 kubelet[3924]: I0122 00:44:52.226279 3924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:52.226707 kubelet[3924]: I0122 00:44:52.226390 3924 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:52.241920 kubelet[3924]: W0122 00:44:52.241897 3924 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 22 00:44:52.242028 kubelet[3924]: E0122 00:44:52.241954 3924 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515.1.0-n-d879fbfda5\" already exists" pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:52.243618 kubelet[3924]: W0122 00:44:52.243594 3924 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 22 00:44:52.243685 kubelet[3924]: E0122 00:44:52.243647 3924 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515.1.0-n-d879fbfda5\" already exists" 
pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" Jan 22 00:44:52.257630 kubelet[3924]: I0122 00:44:52.257508 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515.1.0-n-d879fbfda5" podStartSLOduration=1.25749486 podStartE2EDuration="1.25749486s" podCreationTimestamp="2026-01-22 00:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:44:52.256034159 +0000 UTC m=+1.162267855" watchObservedRunningTime="2026-01-22 00:44:52.25749486 +0000 UTC m=+1.163728553" Jan 22 00:44:52.291137 kubelet[3924]: I0122 00:44:52.291093 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515.1.0-n-d879fbfda5" podStartSLOduration=1.291080924 podStartE2EDuration="1.291080924s" podCreationTimestamp="2026-01-22 00:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:44:52.274897613 +0000 UTC m=+1.181131306" watchObservedRunningTime="2026-01-22 00:44:52.291080924 +0000 UTC m=+1.197314608" Jan 22 00:44:52.300759 kubelet[3924]: I0122 00:44:52.300689 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515.1.0-n-d879fbfda5" podStartSLOduration=1.300674468 podStartE2EDuration="1.300674468s" podCreationTimestamp="2026-01-22 00:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:44:52.292342839 +0000 UTC m=+1.198576526" watchObservedRunningTime="2026-01-22 00:44:52.300674468 +0000 UTC m=+1.206908161" Jan 22 00:44:55.797424 kubelet[3924]: I0122 00:44:55.797388 3924 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 22 00:44:55.797872 containerd[2460]: time="2026-01-22T00:44:55.797839643Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 22 00:44:55.798125 kubelet[3924]: I0122 00:44:55.797996 3924 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 22 00:44:56.495173 systemd[1]: Created slice kubepods-besteffort-pod0fd98442_22c8_4fe7_b5fd_9ddd48c5c3ed.slice - libcontainer container kubepods-besteffort-pod0fd98442_22c8_4fe7_b5fd_9ddd48c5c3ed.slice. 
Jan 22 00:44:56.505391 kubelet[3924]: I0122 00:44:56.505359 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed-kube-proxy\") pod \"kube-proxy-qlzjl\" (UID: \"0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed\") " pod="kube-system/kube-proxy-qlzjl" Jan 22 00:44:56.505391 kubelet[3924]: I0122 00:44:56.505398 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpmc\" (UniqueName: \"kubernetes.io/projected/0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed-kube-api-access-mgpmc\") pod \"kube-proxy-qlzjl\" (UID: \"0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed\") " pod="kube-system/kube-proxy-qlzjl" Jan 22 00:44:56.505584 kubelet[3924]: I0122 00:44:56.505419 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed-xtables-lock\") pod \"kube-proxy-qlzjl\" (UID: \"0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed\") " pod="kube-system/kube-proxy-qlzjl" Jan 22 00:44:56.505584 kubelet[3924]: I0122 00:44:56.505434 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed-lib-modules\") pod \"kube-proxy-qlzjl\" (UID: \"0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed\") " pod="kube-system/kube-proxy-qlzjl" Jan 22 00:44:56.610808 kubelet[3924]: E0122 00:44:56.610761 3924 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 22 00:44:56.610808 kubelet[3924]: E0122 00:44:56.610805 3924 projected.go:194] Error preparing data for projected volume kube-api-access-mgpmc for pod kube-system/kube-proxy-qlzjl: configmap "kube-root-ca.crt" not found Jan 22 00:44:56.610967 kubelet[3924]: E0122 00:44:56.610870 3924 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed-kube-api-access-mgpmc podName:0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed nodeName:}" failed. No retries permitted until 2026-01-22 00:44:57.110850791 +0000 UTC m=+6.017084480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mgpmc" (UniqueName: "kubernetes.io/projected/0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed-kube-api-access-mgpmc") pod "kube-proxy-qlzjl" (UID: "0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed") : configmap "kube-root-ca.crt" not found Jan 22 00:44:56.961756 systemd[1]: Created slice kubepods-besteffort-podf9866ed9_0718_4411_8fbc_d928284b19ad.slice - libcontainer container kubepods-besteffort-podf9866ed9_0718_4411_8fbc_d928284b19ad.slice. 
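The failed projected-volume mount above is not retried immediately: the reconciler schedules the next attempt 500ms later (durationBeforeRetry) and backs the delay off further on repeated failures until the kube-root-ca.crt ConfigMap exists. A generic, illustrative sketch of that retry-with-backoff pattern; mount_volume and the delay cap here are hypothetical stand-ins, not kubelet's actual code:

import time

def retry_with_backoff(operation, initial_delay=0.5, max_delay=30.0, factor=2.0):
    # Keep retrying `operation`, growing the wait after each failure
    # (0.5s, 1s, 2s, ...) up to max_delay, analogous to the
    # "No retries permitted until ..." windows logged above.
    delay = initial_delay
    while True:
        try:
            return operation()
        except RuntimeError as err:
            print(f"failed: {err}; retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * factor, max_delay)

def mount_volume():
    # Hypothetical stand-in for the projected-volume setup that fails above
    # while the "kube-root-ca.crt" ConfigMap does not exist yet.
    raise RuntimeError('configmap "kube-root-ca.crt" not found')

# retry_with_backoff(mount_volume) would keep retrying until the ConfigMap appears.
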
Jan 22 00:44:57.007675 kubelet[3924]: I0122 00:44:57.007639 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vd8\" (UniqueName: \"kubernetes.io/projected/f9866ed9-0718-4411-8fbc-d928284b19ad-kube-api-access-v5vd8\") pod \"tigera-operator-7dcd859c48-7qqls\" (UID: \"f9866ed9-0718-4411-8fbc-d928284b19ad\") " pod="tigera-operator/tigera-operator-7dcd859c48-7qqls" Jan 22 00:44:57.007675 kubelet[3924]: I0122 00:44:57.007672 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9866ed9-0718-4411-8fbc-d928284b19ad-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7qqls\" (UID: \"f9866ed9-0718-4411-8fbc-d928284b19ad\") " pod="tigera-operator/tigera-operator-7dcd859c48-7qqls" Jan 22 00:44:57.264567 containerd[2460]: time="2026-01-22T00:44:57.264454908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7qqls,Uid:f9866ed9-0718-4411-8fbc-d928284b19ad,Namespace:tigera-operator,Attempt:0,}" Jan 22 00:44:57.332677 containerd[2460]: time="2026-01-22T00:44:57.332624586Z" level=info msg="connecting to shim f8fce88153c1ff329ff0cd85b94740ac8831c21a62f7f94260b98fe1142d170b" address="unix:///run/containerd/s/c88d39994924b04bd54a3fe750cab2ee197088798fd817ebdc345962f17bcb58" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:44:57.356925 systemd[1]: Started cri-containerd-f8fce88153c1ff329ff0cd85b94740ac8831c21a62f7f94260b98fe1142d170b.scope - libcontainer container f8fce88153c1ff329ff0cd85b94740ac8831c21a62f7f94260b98fe1142d170b. Jan 22 00:44:57.364000 audit: BPF prog-id=156 op=LOAD Jan 22 00:44:57.366064 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 22 00:44:57.366111 kernel: audit: type=1334 audit(1769042697.364:449): prog-id=156 op=LOAD Jan 22 00:44:57.365000 audit: BPF prog-id=157 op=LOAD Jan 22 00:44:57.368901 kernel: audit: type=1334 audit(1769042697.365:450): prog-id=157 op=LOAD Jan 22 00:44:57.369808 kernel: audit: type=1300 audit(1769042697.365:450): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.365000 audit[3989]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.380619 kernel: audit: type=1327 audit(1769042697.365:450): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.382635 kernel: audit: type=1334 audit(1769042697.365:451): prog-id=157 op=UNLOAD Jan 22 00:44:57.365000 audit: BPF prog-id=157 op=UNLOAD Jan 22 00:44:57.365000 audit[3989]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.396295 kernel: audit: type=1300 audit(1769042697.365:451): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.396348 kernel: audit: type=1327 audit(1769042697.365:451): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.398250 kernel: audit: type=1334 audit(1769042697.365:452): prog-id=158 op=LOAD Jan 22 00:44:57.365000 audit: BPF prog-id=158 op=LOAD Jan 22 00:44:57.365000 audit[3989]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.402633 kernel: audit: type=1300 audit(1769042697.365:452): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.404369 containerd[2460]: time="2026-01-22T00:44:57.404338377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qlzjl,Uid:0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed,Namespace:kube-system,Attempt:0,}" Jan 22 00:44:57.405226 kernel: audit: type=1327 audit(1769042697.365:452): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.365000 audit: BPF prog-id=159 op=LOAD Jan 22 00:44:57.365000 audit[3989]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.365000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.365000 audit: BPF prog-id=159 op=UNLOAD Jan 22 00:44:57.365000 audit[3989]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.365000 audit: BPF prog-id=158 op=UNLOAD Jan 22 00:44:57.365000 audit[3989]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.365000 audit: BPF prog-id=160 op=LOAD Jan 22 00:44:57.365000 audit[3989]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3978 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638666365383831353363316666333239666630636438356239343734 Jan 22 00:44:57.425915 containerd[2460]: time="2026-01-22T00:44:57.425827386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7qqls,Uid:f9866ed9-0718-4411-8fbc-d928284b19ad,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f8fce88153c1ff329ff0cd85b94740ac8831c21a62f7f94260b98fe1142d170b\"" Jan 22 00:44:57.428070 containerd[2460]: time="2026-01-22T00:44:57.428041532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 22 00:44:57.453694 containerd[2460]: time="2026-01-22T00:44:57.453658160Z" level=info msg="connecting to shim 3204557e7d5578ef63fef709c6b9d7dfd98ab1977f59038d2a1626d345d75a00" address="unix:///run/containerd/s/840673c120995df86663ba4f4d6ea92bbdf23a4c3880491d1054bdfc99188123" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:44:57.470890 systemd[1]: Started cri-containerd-3204557e7d5578ef63fef709c6b9d7dfd98ab1977f59038d2a1626d345d75a00.scope - libcontainer container 3204557e7d5578ef63fef709c6b9d7dfd98ab1977f59038d2a1626d345d75a00. 
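From the containerd timestamps above, RunPodSandbox for the tigera-operator pod is requested at 00:44:57.264454908Z and returns its sandbox id at 00:44:57.425827386Z, about 160ms later. A small sketch computing that from the RFC 3339 timestamps (trimmed to microseconds for Python's datetime):

from datetime import datetime, timezone

def parse_ts(ts: str) -> datetime:
    # containerd timestamps carry nanoseconds; keep the first six fractional digits.
    head, frac = ts.rstrip("Z").split(".")
    return datetime.fromisoformat(head).replace(microsecond=int(frac[:6]), tzinfo=timezone.utc)

requested = parse_ts("2026-01-22T00:44:57.264454908Z")   # "RunPodSandbox for ... tigera-operator ..."
returned = parse_ts("2026-01-22T00:44:57.425827386Z")    # "... returns sandbox id \"f8fce881...\""
print((returned - requested).total_seconds())            # ~0.161 s
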
Jan 22 00:44:57.477000 audit: BPF prog-id=161 op=LOAD Jan 22 00:44:57.477000 audit: BPF prog-id=162 op=LOAD Jan 22 00:44:57.477000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.477000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303435353765376435353738656636336665663730396336623964 Jan 22 00:44:57.478000 audit: BPF prog-id=162 op=UNLOAD Jan 22 00:44:57.478000 audit[4034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303435353765376435353738656636336665663730396336623964 Jan 22 00:44:57.478000 audit: BPF prog-id=163 op=LOAD Jan 22 00:44:57.478000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303435353765376435353738656636336665663730396336623964 Jan 22 00:44:57.478000 audit: BPF prog-id=164 op=LOAD Jan 22 00:44:57.478000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303435353765376435353738656636336665663730396336623964 Jan 22 00:44:57.478000 audit: BPF prog-id=164 op=UNLOAD Jan 22 00:44:57.478000 audit[4034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303435353765376435353738656636336665663730396336623964 Jan 22 00:44:57.478000 audit: BPF prog-id=163 op=UNLOAD Jan 22 00:44:57.478000 audit[4034]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303435353765376435353738656636336665663730396336623964 Jan 22 00:44:57.478000 audit: BPF prog-id=165 op=LOAD Jan 22 00:44:57.478000 audit[4034]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4023 pid=4034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:57.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332303435353765376435353738656636336665663730396336623964 Jan 22 00:44:57.517030 containerd[2460]: time="2026-01-22T00:44:57.516925561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qlzjl,Uid:0fd98442-22c8-4fe7-b5fd-9ddd48c5c3ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"3204557e7d5578ef63fef709c6b9d7dfd98ab1977f59038d2a1626d345d75a00\"" Jan 22 00:44:57.519954 containerd[2460]: time="2026-01-22T00:44:57.519927196Z" level=info msg="CreateContainer within sandbox \"3204557e7d5578ef63fef709c6b9d7dfd98ab1977f59038d2a1626d345d75a00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 22 00:44:58.616476 containerd[2460]: time="2026-01-22T00:44:58.616164662Z" level=info msg="Container 26a8e35e615f4c03ac943007ee1d450be11e9f9cf7101836b954540ab4738766: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:44:58.633848 containerd[2460]: time="2026-01-22T00:44:58.633816967Z" level=info msg="CreateContainer within sandbox \"3204557e7d5578ef63fef709c6b9d7dfd98ab1977f59038d2a1626d345d75a00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26a8e35e615f4c03ac943007ee1d450be11e9f9cf7101836b954540ab4738766\"" Jan 22 00:44:58.634479 containerd[2460]: time="2026-01-22T00:44:58.634425714Z" level=info msg="StartContainer for \"26a8e35e615f4c03ac943007ee1d450be11e9f9cf7101836b954540ab4738766\"" Jan 22 00:44:58.635875 containerd[2460]: time="2026-01-22T00:44:58.635844753Z" level=info msg="connecting to shim 26a8e35e615f4c03ac943007ee1d450be11e9f9cf7101836b954540ab4738766" address="unix:///run/containerd/s/840673c120995df86663ba4f4d6ea92bbdf23a4c3880491d1054bdfc99188123" protocol=ttrpc version=3 Jan 22 00:44:58.657919 systemd[1]: Started cri-containerd-26a8e35e615f4c03ac943007ee1d450be11e9f9cf7101836b954540ab4738766.scope - libcontainer container 26a8e35e615f4c03ac943007ee1d450be11e9f9cf7101836b954540ab4738766. 
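Each container start in this log is accompanied by a burst of "audit: BPF prog-id=... op=LOAD/UNLOAD" records emitted while runc sets up eBPF programs for the container (typically the cgroup v2 device filter, though the records themselves do not name the program type). A small, generic sketch for tallying them from a saved copy of this journal; journal.txt is a placeholder path:

import re
from collections import Counter

pattern = re.compile(r"audit: BPF prog-id=\d+ op=(LOAD|UNLOAD)")
ops = Counter()
with open("journal.txt") as f:        # placeholder: a saved copy of this log
    for line in f:
        ops.update(pattern.findall(line))
print(dict(ops))                      # e.g. {'LOAD': ..., 'UNLOAD': ...}, depending on the capture window
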
Jan 22 00:44:58.693000 audit: BPF prog-id=166 op=LOAD Jan 22 00:44:58.693000 audit[4060]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4023 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613865333565363135663463303361633934333030376565316434 Jan 22 00:44:58.693000 audit: BPF prog-id=167 op=LOAD Jan 22 00:44:58.693000 audit[4060]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4023 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613865333565363135663463303361633934333030376565316434 Jan 22 00:44:58.693000 audit: BPF prog-id=167 op=UNLOAD Jan 22 00:44:58.693000 audit[4060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613865333565363135663463303361633934333030376565316434 Jan 22 00:44:58.693000 audit: BPF prog-id=166 op=UNLOAD Jan 22 00:44:58.693000 audit[4060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4023 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613865333565363135663463303361633934333030376565316434 Jan 22 00:44:58.693000 audit: BPF prog-id=168 op=LOAD Jan 22 00:44:58.693000 audit[4060]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4023 pid=4060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236613865333565363135663463303361633934333030376565316434 Jan 22 00:44:58.721037 containerd[2460]: time="2026-01-22T00:44:58.721003596Z" level=info msg="StartContainer for 
\"26a8e35e615f4c03ac943007ee1d450be11e9f9cf7101836b954540ab4738766\" returns successfully" Jan 22 00:44:58.812000 audit[4124]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:58.812000 audit[4124]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd692cf500 a2=0 a3=7ffd692cf4ec items=0 ppid=4073 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 00:44:58.815000 audit[4123]: NETFILTER_CFG table=mangle:58 family=2 entries=1 op=nft_register_chain pid=4123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.815000 audit[4123]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffeb281b90 a2=0 a3=7fffeb281b7c items=0 ppid=4073 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 22 00:44:58.817000 audit[4127]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=4127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.817000 audit[4127]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0fa1e1d0 a2=0 a3=7ffe0fa1e1bc items=0 ppid=4073 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 00:44:58.818000 audit[4128]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=4128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.818000 audit[4128]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc34297dd0 a2=0 a3=7ffc34297dbc items=0 ppid=4073 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 00:44:58.819000 audit[4126]: NETFILTER_CFG table=nat:61 family=10 entries=1 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:58.819000 audit[4126]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfbf67e90 a2=0 a3=7ffcfbf67e7c items=0 ppid=4073 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 22 00:44:58.822000 audit[4129]: NETFILTER_CFG table=filter:62 family=10 
entries=1 op=nft_register_chain pid=4129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:58.822000 audit[4129]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd14f38d50 a2=0 a3=7ffd14f38d3c items=0 ppid=4073 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 22 00:44:58.915000 audit[4130]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4130 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.915000 audit[4130]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe41605fa0 a2=0 a3=7ffe41605f8c items=0 ppid=4073 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 00:44:58.918000 audit[4132]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.918000 audit[4132]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc771d28a0 a2=0 a3=7ffc771d288c items=0 ppid=4073 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 22 00:44:58.922000 audit[4135]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.922000 audit[4135]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe1d3bc9e0 a2=0 a3=7ffe1d3bc9cc items=0 ppid=4073 pid=4135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.922000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 22 00:44:58.923000 audit[4136]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.923000 audit[4136]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd116c5920 a2=0 a3=7ffd116c590c items=0 ppid=4073 pid=4136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.923000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 00:44:58.925000 audit[4138]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.925000 audit[4138]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffef4bbfa0 a2=0 a3=7fffef4bbf8c items=0 ppid=4073 pid=4138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.925000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 00:44:58.926000 audit[4139]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.926000 audit[4139]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe05d33ee0 a2=0 a3=7ffe05d33ecc items=0 ppid=4073 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:44:58.929000 audit[4141]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.929000 audit[4141]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe0b38aa80 a2=0 a3=7ffe0b38aa6c items=0 ppid=4073 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.929000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 22 00:44:58.932000 audit[4144]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.932000 audit[4144]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd966052c0 a2=0 a3=7ffd966052ac items=0 ppid=4073 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.932000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 22 00:44:58.933000 audit[4145]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.933000 audit[4145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd74de570 a2=0 a3=7ffcd74de55c items=0 
ppid=4073 pid=4145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:44:58.936000 audit[4147]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.936000 audit[4147]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8d47f750 a2=0 a3=7ffd8d47f73c items=0 ppid=4073 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:44:58.937000 audit[4148]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.937000 audit[4148]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc002272f0 a2=0 a3=7ffc002272dc items=0 ppid=4073 pid=4148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.937000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:44:58.939000 audit[4150]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.939000 audit[4150]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee896ec60 a2=0 a3=7ffee896ec4c items=0 ppid=4073 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.939000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 00:44:58.942000 audit[4153]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.942000 audit[4153]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe84651a20 a2=0 a3=7ffe84651a0c items=0 ppid=4073 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.942000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 00:44:58.946000 audit[4156]: NETFILTER_CFG table=filter:76 
family=2 entries=1 op=nft_register_rule pid=4156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.946000 audit[4156]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc024aa0b0 a2=0 a3=7ffc024aa09c items=0 ppid=4073 pid=4156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.946000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 22 00:44:58.947000 audit[4157]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.947000 audit[4157]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd7f9192f0 a2=0 a3=7ffd7f9192dc items=0 ppid=4073 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:44:58.949000 audit[4159]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.949000 audit[4159]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc453992c0 a2=0 a3=7ffc453992ac items=0 ppid=4073 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.949000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:44:58.952000 audit[4162]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.952000 audit[4162]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe80cb120 a2=0 a3=7fffe80cb10c items=0 ppid=4073 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.952000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:44:58.953000 audit[4163]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.953000 audit[4163]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4264ea60 a2=0 a3=7fff4264ea4c items=0 ppid=4073 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.953000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:44:58.956000 audit[4165]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 22 00:44:58.956000 audit[4165]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc91478270 a2=0 a3=7ffc9147825c items=0 ppid=4073 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.956000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:44:58.982000 audit[4171]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:44:58.982000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc881853f0 a2=0 a3=7ffc881853dc items=0 ppid=4073 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.982000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:44:58.992000 audit[4171]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:44:58.992000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc881853f0 a2=0 a3=7ffc881853dc items=0 ppid=4073 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:44:58.993000 audit[4176]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:58.993000 audit[4176]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffffeb35300 a2=0 a3=7ffffeb352ec items=0 ppid=4073 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 22 00:44:58.996000 audit[4178]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:58.996000 audit[4178]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcd413b240 a2=0 a3=7ffcd413b22c items=0 ppid=4073 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.996000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 22 00:44:58.999000 audit[4181]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:58.999000 audit[4181]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe35c5dfd0 a2=0 a3=7ffe35c5dfbc items=0 ppid=4073 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:58.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 22 00:44:59.000000 audit[4182]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4182 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.000000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9277fb80 a2=0 a3=7fff9277fb6c items=0 ppid=4073 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.000000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 22 00:44:59.003000 audit[4184]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.003000 audit[4184]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8f50e4f0 a2=0 a3=7fff8f50e4dc items=0 ppid=4073 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 22 00:44:59.004000 audit[4185]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.004000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff003aba10 a2=0 a3=7fff003ab9fc items=0 ppid=4073 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 22 00:44:59.006000 audit[4187]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.006000 audit[4187]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd662f7f80 a2=0 
a3=7ffd662f7f6c items=0 ppid=4073 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 22 00:44:59.009000 audit[4190]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.009000 audit[4190]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc50fa69c0 a2=0 a3=7ffc50fa69ac items=0 ppid=4073 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 22 00:44:59.010000 audit[4191]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4191 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.010000 audit[4191]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8fe19fb0 a2=0 a3=7ffd8fe19f9c items=0 ppid=4073 pid=4191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.010000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 22 00:44:59.012000 audit[4193]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.012000 audit[4193]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe177bb050 a2=0 a3=7ffe177bb03c items=0 ppid=4073 pid=4193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 22 00:44:59.013000 audit[4194]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4194 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.013000 audit[4194]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffdd5cc020 a2=0 a3=7fffdd5cc00c items=0 ppid=4073 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 22 00:44:59.016000 
audit[4196]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.016000 audit[4196]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcbee3cf70 a2=0 a3=7ffcbee3cf5c items=0 ppid=4073 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 22 00:44:59.019000 audit[4199]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4199 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.019000 audit[4199]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe1cc69780 a2=0 a3=7ffe1cc6976c items=0 ppid=4073 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 22 00:44:59.022000 audit[4202]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.022000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe825871d0 a2=0 a3=7ffe825871bc items=0 ppid=4073 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.022000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 22 00:44:59.023000 audit[4203]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.023000 audit[4203]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcee0a8f00 a2=0 a3=7ffcee0a8eec items=0 ppid=4073 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.023000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 22 00:44:59.025000 audit[4205]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.025000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd3dcdb0e0 a2=0 a3=7ffd3dcdb0cc items=0 ppid=4073 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:44:59.028000 audit[4208]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.028000 audit[4208]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf319a8d0 a2=0 a3=7ffcf319a8bc items=0 ppid=4073 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.028000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 22 00:44:59.029000 audit[4209]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.029000 audit[4209]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff38149260 a2=0 a3=7fff3814924c items=0 ppid=4073 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.029000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 22 00:44:59.031000 audit[4211]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.031000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcb8d28170 a2=0 a3=7ffcb8d2815c items=0 ppid=4073 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 22 00:44:59.032000 audit[4212]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.032000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa6234010 a2=0 a3=7fffa6233ffc items=0 ppid=4073 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 22 00:44:59.034000 audit[4214]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.034000 audit[4214]: SYSCALL arch=c000003e syscall=46 
success=yes exit=228 a0=3 a1=7fff27b6e190 a2=0 a3=7fff27b6e17c items=0 ppid=4073 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:44:59.037000 audit[4217]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 22 00:44:59.037000 audit[4217]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffce18bd4b0 a2=0 a3=7ffce18bd49c items=0 ppid=4073 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 22 00:44:59.040000 audit[4219]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:44:59.040000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe9d68a980 a2=0 a3=7ffe9d68a96c items=0 ppid=4073 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.040000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:44:59.040000 audit[4219]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 22 00:44:59.040000 audit[4219]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe9d68a980 a2=0 a3=7ffe9d68a96c items=0 ppid=4073 pid=4219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:44:59.040000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:44:59.259401 kubelet[3924]: I0122 00:44:59.259060 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qlzjl" podStartSLOduration=3.259039317 podStartE2EDuration="3.259039317s" podCreationTimestamp="2026-01-22 00:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:44:59.258510692 +0000 UTC m=+8.164744389" watchObservedRunningTime="2026-01-22 00:44:59.259039317 +0000 UTC m=+8.165273007" Jan 22 00:44:59.633461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819086562.mount: Deactivated successfully. 
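The long run of NETFILTER_CFG records above is evidently kube-proxy programming its chains: every record shares ppid=4073, the chains are the familiar KUBE-* set in the mangle, filter and nat tables, and each is created once per address family (family=2 is AF_INET/iptables, family=10 is AF_INET6/ip6tables), partly through individual iptables/ip6tables calls and partly through iptables-restore --noflush --counters. A rough sketch for tallying these records from a saved copy of the journal (the file name is a placeholder):

```python
# Rough sketch: tally NETFILTER_CFG audit records (like those above) by
# address family, table and operation, reading a saved journal text file.
import re
from collections import Counter

FAMILY = {"2": "ipv4", "10": "ipv6"}  # AF_INET / AF_INET6
RECORD = re.compile(
    r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) "
    r"entries=(?P<entries>\d+) op=(?P<op>\w+)"
)

def summarize(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in RECORD.finditer(line):
                key = (FAMILY.get(m["family"], m["family"]), m["table"], m["op"])
                counts[key] += int(m["entries"])
    return counts

if __name__ == "__main__":
    for (family, table, op), n in sorted(summarize("node-journal.txt").items()):  # placeholder path
        print(f"{family:4} {table:7} {op:20} {n}")
```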
Jan 22 00:45:00.107307 containerd[2460]: time="2026-01-22T00:45:00.107260671Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:00.109919 containerd[2460]: time="2026-01-22T00:45:00.109848992Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 22 00:45:00.112680 containerd[2460]: time="2026-01-22T00:45:00.112638628Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:00.115926 containerd[2460]: time="2026-01-22T00:45:00.115883499Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:00.116488 containerd[2460]: time="2026-01-22T00:45:00.116385200Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.688148989s" Jan 22 00:45:00.116488 containerd[2460]: time="2026-01-22T00:45:00.116413183Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 22 00:45:00.119361 containerd[2460]: time="2026-01-22T00:45:00.119334295Z" level=info msg="CreateContainer within sandbox \"f8fce88153c1ff329ff0cd85b94740ac8831c21a62f7f94260b98fe1142d170b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 22 00:45:00.233244 containerd[2460]: time="2026-01-22T00:45:00.233150503Z" level=info msg="Container 34a6abdfd0ff76e8408cbdbdb059cdeb1aac15f8cacdeedbf7d1f8360fdbd654: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:45:00.259620 containerd[2460]: time="2026-01-22T00:45:00.259587209Z" level=info msg="CreateContainer within sandbox \"f8fce88153c1ff329ff0cd85b94740ac8831c21a62f7f94260b98fe1142d170b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34a6abdfd0ff76e8408cbdbdb059cdeb1aac15f8cacdeedbf7d1f8360fdbd654\"" Jan 22 00:45:00.260070 containerd[2460]: time="2026-01-22T00:45:00.260042965Z" level=info msg="StartContainer for \"34a6abdfd0ff76e8408cbdbdb059cdeb1aac15f8cacdeedbf7d1f8360fdbd654\"" Jan 22 00:45:00.261063 containerd[2460]: time="2026-01-22T00:45:00.261032793Z" level=info msg="connecting to shim 34a6abdfd0ff76e8408cbdbdb059cdeb1aac15f8cacdeedbf7d1f8360fdbd654" address="unix:///run/containerd/s/c88d39994924b04bd54a3fe750cab2ee197088798fd817ebdc345962f17bcb58" protocol=ttrpc version=3 Jan 22 00:45:00.280892 systemd[1]: Started cri-containerd-34a6abdfd0ff76e8408cbdbdb059cdeb1aac15f8cacdeedbf7d1f8360fdbd654.scope - libcontainer container 34a6abdfd0ff76e8408cbdbdb059cdeb1aac15f8cacdeedbf7d1f8360fdbd654. 
Jan 22 00:45:00.288000 audit: BPF prog-id=169 op=LOAD Jan 22 00:45:00.289000 audit: BPF prog-id=170 op=LOAD Jan 22 00:45:00.289000 audit[4228]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3978 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:00.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334613661626466643066663736653834303863626462646230353963 Jan 22 00:45:00.289000 audit: BPF prog-id=170 op=UNLOAD Jan 22 00:45:00.289000 audit[4228]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3978 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:00.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334613661626466643066663736653834303863626462646230353963 Jan 22 00:45:00.289000 audit: BPF prog-id=171 op=LOAD Jan 22 00:45:00.289000 audit[4228]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3978 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:00.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334613661626466643066663736653834303863626462646230353963 Jan 22 00:45:00.289000 audit: BPF prog-id=172 op=LOAD Jan 22 00:45:00.289000 audit[4228]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3978 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:00.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334613661626466643066663736653834303863626462646230353963 Jan 22 00:45:00.289000 audit: BPF prog-id=172 op=UNLOAD Jan 22 00:45:00.289000 audit[4228]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3978 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:00.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334613661626466643066663736653834303863626462646230353963 Jan 22 00:45:00.289000 audit: BPF prog-id=171 op=UNLOAD Jan 22 00:45:00.289000 audit[4228]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3978 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:00.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334613661626466643066663736653834303863626462646230353963 Jan 22 00:45:00.289000 audit: BPF prog-id=173 op=LOAD Jan 22 00:45:00.289000 audit[4228]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3978 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:00.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334613661626466643066663736653834303863626462646230353963 Jan 22 00:45:00.308719 containerd[2460]: time="2026-01-22T00:45:00.308669153Z" level=info msg="StartContainer for \"34a6abdfd0ff76e8408cbdbdb059cdeb1aac15f8cacdeedbf7d1f8360fdbd654\" returns successfully" Jan 22 00:45:01.251024 kubelet[3924]: I0122 00:45:01.250819 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7qqls" podStartSLOduration=2.561068574 podStartE2EDuration="5.250802569s" podCreationTimestamp="2026-01-22 00:44:56 +0000 UTC" firstStartedPulling="2026-01-22 00:44:57.427467142 +0000 UTC m=+6.333700823" lastFinishedPulling="2026-01-22 00:45:00.117201126 +0000 UTC m=+9.023434818" observedRunningTime="2026-01-22 00:45:01.250719527 +0000 UTC m=+10.156953221" watchObservedRunningTime="2026-01-22 00:45:01.250802569 +0000 UTC m=+10.157036262" Jan 22 00:45:05.891715 sudo[2951]: pam_unix(sudo:session): session closed for user root Jan 22 00:45:05.890000 audit[2951]: USER_END pid=2951 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:45:05.897680 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 22 00:45:05.897833 kernel: audit: type=1106 audit(1769042705.890:529): pid=2951 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:45:05.896000 audit[2951]: CRED_DISP pid=2951 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 22 00:45:05.904771 kernel: audit: type=1104 audit(1769042705.896:530): pid=2951 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 22 00:45:06.005231 sshd[2950]: Connection closed by 10.200.16.10 port 42136 Jan 22 00:45:06.005126 sshd-session[2947]: pam_unix(sshd:session): session closed for user core Jan 22 00:45:06.014070 kernel: audit: type=1106 audit(1769042706.006:531): pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:45:06.006000 audit[2947]: USER_END pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:45:06.013839 systemd[1]: sshd@6-10.200.8.28:22-10.200.16.10:42136.service: Deactivated successfully. Jan 22 00:45:06.006000 audit[2947]: CRED_DISP pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:45:06.021113 kernel: audit: type=1104 audit(1769042706.006:532): pid=2947 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:45:06.022537 systemd[1]: session-9.scope: Deactivated successfully. Jan 22 00:45:06.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.28:22-10.200.16.10:42136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:45:06.022960 systemd[1]: session-9.scope: Consumed 3.342s CPU time, 227.1M memory peak. Jan 22 00:45:06.027757 kernel: audit: type=1131 audit(1769042706.012:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.28:22-10.200.16.10:42136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:45:06.030195 systemd-logind[2425]: Session 9 logged out. Waiting for processes to exit. Jan 22 00:45:06.031644 systemd-logind[2425]: Removed session 9. 
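In the kauditd lines above, the parenthesised stamp, e.g. audit(1769042705.890:529), is a Unix epoch time in seconds and milliseconds followed by a record serial; records belonging to the same event (SYSCALL, PROCTITLE, NETFILTER_CFG, ...) share a serial, and converting the epoch reproduces the surrounding journal timestamps. A small Python check using the values from the records above:

```python
# Small check: parse the "audit(<seconds>.<millis>:<serial>)" stamp used in
# the kauditd lines above and convert it to a UTC timestamp.
from datetime import datetime, timezone

def parse_audit_stamp(stamp: str) -> tuple[datetime, int]:
    ts, serial = stamp.split(":")
    secs, millis = ts.split(".")
    when = datetime.fromtimestamp(int(secs), tz=timezone.utc)
    return when.replace(microsecond=int(millis) * 1000), int(serial)

when, serial = parse_audit_stamp("1769042705.890:529")
print(when.isoformat(), serial)
# -> 2026-01-22T00:45:05.890000+00:00 529 (matches the USER_END journal line above)
```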
Jan 22 00:45:07.178772 kernel: audit: type=1325 audit(1769042707.174:534): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:07.174000 audit[4311]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:07.174000 audit[4311]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff4a3c52a0 a2=0 a3=7fff4a3c528c items=0 ppid=4073 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:07.185800 kernel: audit: type=1300 audit(1769042707.174:534): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff4a3c52a0 a2=0 a3=7fff4a3c528c items=0 ppid=4073 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:07.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:07.192760 kernel: audit: type=1327 audit(1769042707.174:534): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:07.182000 audit[4311]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:07.198760 kernel: audit: type=1325 audit(1769042707.182:535): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4311 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:07.182000 audit[4311]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4a3c52a0 a2=0 a3=0 items=0 ppid=4073 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:07.208781 kernel: audit: type=1300 audit(1769042707.182:535): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff4a3c52a0 a2=0 a3=0 items=0 ppid=4073 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:07.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:07.208000 audit[4313]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:07.208000 audit[4313]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff31184be0 a2=0 a3=7fff31184bcc items=0 ppid=4073 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:07.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:07.212000 audit[4313]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4313 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:07.212000 audit[4313]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff31184be0 a2=0 a3=0 items=0 ppid=4073 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:07.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:09.252000 audit[4315]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:09.252000 audit[4315]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fffa862a4f0 a2=0 a3=7fffa862a4dc items=0 ppid=4073 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:09.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:09.261000 audit[4315]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:09.261000 audit[4315]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa862a4f0 a2=0 a3=0 items=0 ppid=4073 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:09.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:09.285000 audit[4317]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:09.285000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff6793fb70 a2=0 a3=7fff6793fb5c items=0 ppid=4073 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:09.285000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:09.291000 audit[4317]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:09.291000 audit[4317]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6793fb70 a2=0 a3=0 items=0 ppid=4073 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:09.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:10.346000 audit[4319]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:10.346000 audit[4319]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7fffadef3410 a2=0 a3=7fffadef33fc items=0 ppid=4073 pid=4319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:10.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:10.350000 audit[4319]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:10.350000 audit[4319]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffadef3410 a2=0 a3=0 items=0 ppid=4073 pid=4319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:10.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:10.826149 systemd[1]: Created slice kubepods-besteffort-pod16e64268_4537_4e39_9808_d799e9e81037.slice - libcontainer container kubepods-besteffort-pod16e64268_4537_4e39_9808_d799e9e81037.slice. Jan 22 00:45:10.892602 kubelet[3924]: I0122 00:45:10.892567 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e64268-4537-4e39-9808-d799e9e81037-tigera-ca-bundle\") pod \"calico-typha-7b654f79cf-ks8lc\" (UID: \"16e64268-4537-4e39-9808-d799e9e81037\") " pod="calico-system/calico-typha-7b654f79cf-ks8lc" Jan 22 00:45:10.892956 kubelet[3924]: I0122 00:45:10.892660 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wvfc\" (UniqueName: \"kubernetes.io/projected/16e64268-4537-4e39-9808-d799e9e81037-kube-api-access-7wvfc\") pod \"calico-typha-7b654f79cf-ks8lc\" (UID: \"16e64268-4537-4e39-9808-d799e9e81037\") " pod="calico-system/calico-typha-7b654f79cf-ks8lc" Jan 22 00:45:10.892956 kubelet[3924]: I0122 00:45:10.892682 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/16e64268-4537-4e39-9808-d799e9e81037-typha-certs\") pod \"calico-typha-7b654f79cf-ks8lc\" (UID: \"16e64268-4537-4e39-9808-d799e9e81037\") " pod="calico-system/calico-typha-7b654f79cf-ks8lc" Jan 22 00:45:11.057059 systemd[1]: Created slice kubepods-besteffort-pod78743a38_d4b5_4927_8e5e_4f2cc015a03f.slice - libcontainer container kubepods-besteffort-pod78743a38_d4b5_4927_8e5e_4f2cc015a03f.slice. 
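The PROCTITLE values in the audit records above are hex-encoded argv strings, with NUL bytes separating the arguments. A minimal sketch of decoding one, assuming only Go and its standard library (the hex value is copied from the iptables-restore records above; the helper name decodeProctitle is illustrative, not part of any tool in this log):

    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    // decodeProctitle converts an audit PROCTITLE hex payload back into the
    // original command line. The kernel records /proc/<pid>/cmdline, in which
    // arguments are separated by NUL bytes, so NULs are replaced with spaces.
    func decodeProctitle(h string) (string, error) {
    	raw, err := hex.DecodeString(h)
    	if err != nil {
    		return "", err
    	}
    	return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
    	// PROCTITLE value as it appears in the audit records above.
    	const p = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    	cmd, err := decodeProctitle(p)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
    }

Decoded, the value reads "iptables-restore -w 5 -W 100000 --noflush --counters", which is consistent with the comm="iptables-restor" (truncated to 15 characters) and exe="/usr/sbin/xtables-nft-multi" fields in the same records.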
Jan 22 00:45:11.094161 kubelet[3924]: I0122 00:45:11.093669 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-cni-bin-dir\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094161 kubelet[3924]: I0122 00:45:11.093854 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78743a38-d4b5-4927-8e5e-4f2cc015a03f-tigera-ca-bundle\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094161 kubelet[3924]: I0122 00:45:11.093885 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-policysync\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094161 kubelet[3924]: I0122 00:45:11.093904 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-cni-net-dir\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094161 kubelet[3924]: I0122 00:45:11.093920 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/78743a38-d4b5-4927-8e5e-4f2cc015a03f-node-certs\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094384 kubelet[3924]: I0122 00:45:11.093939 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-xtables-lock\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094384 kubelet[3924]: I0122 00:45:11.093959 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2nrz\" (UniqueName: \"kubernetes.io/projected/78743a38-d4b5-4927-8e5e-4f2cc015a03f-kube-api-access-d2nrz\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094384 kubelet[3924]: I0122 00:45:11.093980 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-var-lib-calico\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094384 kubelet[3924]: I0122 00:45:11.094000 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-cni-log-dir\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094384 kubelet[3924]: I0122 00:45:11.094024 3924 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-flexvol-driver-host\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094500 kubelet[3924]: I0122 00:45:11.094043 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-lib-modules\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.094500 kubelet[3924]: I0122 00:45:11.094060 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/78743a38-d4b5-4927-8e5e-4f2cc015a03f-var-run-calico\") pod \"calico-node-nrfzq\" (UID: \"78743a38-d4b5-4927-8e5e-4f2cc015a03f\") " pod="calico-system/calico-node-nrfzq" Jan 22 00:45:11.131515 containerd[2460]: time="2026-01-22T00:45:11.131473383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b654f79cf-ks8lc,Uid:16e64268-4537-4e39-9808-d799e9e81037,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:11.174634 containerd[2460]: time="2026-01-22T00:45:11.174444653Z" level=info msg="connecting to shim 3425648bf1fd71565614e49141b265b0cd45aff0f8423240b3272c975abba3eb" address="unix:///run/containerd/s/7f5b055e79af9759ec0ba8a370d2ec2e36003c2a04b75edb58f96c2061bf2f02" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:11.196937 systemd[1]: Started cri-containerd-3425648bf1fd71565614e49141b265b0cd45aff0f8423240b3272c975abba3eb.scope - libcontainer container 3425648bf1fd71565614e49141b265b0cd45aff0f8423240b3272c975abba3eb. Jan 22 00:45:11.203138 kubelet[3924]: E0122 00:45:11.203109 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.203810 kubelet[3924]: W0122 00:45:11.203360 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.203810 kubelet[3924]: E0122 00:45:11.203393 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.212586 kubelet[3924]: E0122 00:45:11.212527 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.212586 kubelet[3924]: W0122 00:45:11.212542 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.212586 kubelet[3924]: E0122 00:45:11.212558 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.222000 audit: BPF prog-id=174 op=LOAD Jan 22 00:45:11.224676 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 22 00:45:11.224758 kernel: audit: type=1334 audit(1769042711.222:544): prog-id=174 op=LOAD Jan 22 00:45:11.226000 audit: BPF prog-id=175 op=LOAD Jan 22 00:45:11.226000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.237798 kernel: audit: type=1334 audit(1769042711.226:545): prog-id=175 op=LOAD Jan 22 00:45:11.237866 kernel: audit: type=1300 audit(1769042711.226:545): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.245047 kernel: audit: type=1327 audit(1769042711.226:545): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.226000 audit: BPF prog-id=175 op=UNLOAD Jan 22 00:45:11.255608 kernel: audit: type=1334 audit(1769042711.226:546): prog-id=175 op=UNLOAD Jan 22 00:45:11.255667 kernel: audit: type=1300 audit(1769042711.226:546): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit[4342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.264519 kubelet[3924]: E0122 00:45:11.257538 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:11.265606 kernel: audit: type=1327 audit(1769042711.226:546): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.226000 audit: BPF prog-id=176 op=LOAD Jan 22 00:45:11.283587 kernel: audit: type=1334 audit(1769042711.226:547): prog-id=176 op=LOAD Jan 22 00:45:11.283663 kernel: audit: type=1300 audit(1769042711.226:547): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.292427 kubelet[3924]: E0122 00:45:11.287802 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.292427 kubelet[3924]: W0122 00:45:11.287819 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.292427 kubelet[3924]: E0122 00:45:11.287837 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.292427 kubelet[3924]: E0122 00:45:11.287975 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.292427 kubelet[3924]: W0122 00:45:11.287981 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.292427 kubelet[3924]: E0122 00:45:11.287989 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.292427 kubelet[3924]: E0122 00:45:11.288086 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.292427 kubelet[3924]: W0122 00:45:11.288091 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.292427 kubelet[3924]: E0122 00:45:11.288097 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.292427 kubelet[3924]: E0122 00:45:11.288237 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.292757 kernel: audit: type=1327 audit(1769042711.226:547): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.292790 kubelet[3924]: W0122 00:45:11.288242 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.292790 kubelet[3924]: E0122 00:45:11.288249 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.292790 kubelet[3924]: E0122 00:45:11.288353 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.292790 kubelet[3924]: W0122 00:45:11.288358 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.292790 kubelet[3924]: E0122 00:45:11.288364 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.292790 kubelet[3924]: E0122 00:45:11.288449 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.292790 kubelet[3924]: W0122 00:45:11.288453 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.292790 kubelet[3924]: E0122 00:45:11.288459 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.292790 kubelet[3924]: E0122 00:45:11.288544 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.292790 kubelet[3924]: W0122 00:45:11.288558 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293020 kubelet[3924]: E0122 00:45:11.288564 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.293020 kubelet[3924]: E0122 00:45:11.288656 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293020 kubelet[3924]: W0122 00:45:11.288661 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293020 kubelet[3924]: E0122 00:45:11.288666 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293020 kubelet[3924]: E0122 00:45:11.288769 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293020 kubelet[3924]: W0122 00:45:11.288775 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293020 kubelet[3924]: E0122 00:45:11.288781 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293020 kubelet[3924]: E0122 00:45:11.288868 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293020 kubelet[3924]: W0122 00:45:11.288873 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293020 kubelet[3924]: E0122 00:45:11.288878 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293240 kubelet[3924]: E0122 00:45:11.288961 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293240 kubelet[3924]: W0122 00:45:11.288966 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293240 kubelet[3924]: E0122 00:45:11.288971 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293240 kubelet[3924]: E0122 00:45:11.289058 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293240 kubelet[3924]: W0122 00:45:11.289064 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293240 kubelet[3924]: E0122 00:45:11.289070 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.293240 kubelet[3924]: E0122 00:45:11.289164 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293240 kubelet[3924]: W0122 00:45:11.289169 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293240 kubelet[3924]: E0122 00:45:11.289175 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293240 kubelet[3924]: E0122 00:45:11.289259 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293470 kubelet[3924]: W0122 00:45:11.289264 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293470 kubelet[3924]: E0122 00:45:11.289269 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293470 kubelet[3924]: E0122 00:45:11.289353 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293470 kubelet[3924]: W0122 00:45:11.289358 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293470 kubelet[3924]: E0122 00:45:11.289363 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293470 kubelet[3924]: E0122 00:45:11.290853 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293470 kubelet[3924]: W0122 00:45:11.290865 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293470 kubelet[3924]: E0122 00:45:11.290878 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293470 kubelet[3924]: E0122 00:45:11.291018 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293470 kubelet[3924]: W0122 00:45:11.291024 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293797 kubelet[3924]: E0122 00:45:11.291031 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.293797 kubelet[3924]: E0122 00:45:11.291131 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293797 kubelet[3924]: W0122 00:45:11.291135 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293797 kubelet[3924]: E0122 00:45:11.291141 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293797 kubelet[3924]: E0122 00:45:11.291225 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293797 kubelet[3924]: W0122 00:45:11.291230 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293797 kubelet[3924]: E0122 00:45:11.291235 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.293797 kubelet[3924]: E0122 00:45:11.291319 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.293797 kubelet[3924]: W0122 00:45:11.291324 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.293797 kubelet[3924]: E0122 00:45:11.291329 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.226000 audit: BPF prog-id=177 op=LOAD Jan 22 00:45:11.226000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.226000 audit: BPF prog-id=177 op=UNLOAD Jan 22 00:45:11.226000 audit[4342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.226000 audit: BPF prog-id=176 op=UNLOAD Jan 22 00:45:11.226000 audit[4342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.226000 audit: BPF prog-id=178 op=LOAD Jan 22 00:45:11.226000 audit[4342]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4331 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.226000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323536343862663166643731353635363134653439313431623236 Jan 22 00:45:11.295799 kubelet[3924]: E0122 00:45:11.295371 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.295799 kubelet[3924]: W0122 00:45:11.295386 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.295799 kubelet[3924]: E0122 00:45:11.295400 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.295799 kubelet[3924]: I0122 00:45:11.295426 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdmdn\" (UniqueName: \"kubernetes.io/projected/222ac10e-a19c-48d7-ba2a-f1cdbf34cf86-kube-api-access-bdmdn\") pod \"csi-node-driver-78h9h\" (UID: \"222ac10e-a19c-48d7-ba2a-f1cdbf34cf86\") " pod="calico-system/csi-node-driver-78h9h" Jan 22 00:45:11.295799 kubelet[3924]: E0122 00:45:11.295553 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.295799 kubelet[3924]: W0122 00:45:11.295559 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.295799 kubelet[3924]: E0122 00:45:11.295567 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.295799 kubelet[3924]: E0122 00:45:11.295681 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.295799 kubelet[3924]: W0122 00:45:11.295686 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296096 kubelet[3924]: E0122 00:45:11.295692 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.296096 kubelet[3924]: I0122 00:45:11.295582 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/222ac10e-a19c-48d7-ba2a-f1cdbf34cf86-varrun\") pod \"csi-node-driver-78h9h\" (UID: \"222ac10e-a19c-48d7-ba2a-f1cdbf34cf86\") " pod="calico-system/csi-node-driver-78h9h" Jan 22 00:45:11.296096 kubelet[3924]: E0122 00:45:11.295877 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.296096 kubelet[3924]: W0122 00:45:11.295882 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296096 kubelet[3924]: E0122 00:45:11.295889 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.296096 kubelet[3924]: E0122 00:45:11.296039 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.296096 kubelet[3924]: W0122 00:45:11.296045 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296096 kubelet[3924]: E0122 00:45:11.296059 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.296304 kubelet[3924]: E0122 00:45:11.296168 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.296304 kubelet[3924]: W0122 00:45:11.296173 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296304 kubelet[3924]: E0122 00:45:11.296186 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.296304 kubelet[3924]: E0122 00:45:11.296286 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.296304 kubelet[3924]: W0122 00:45:11.296292 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296304 kubelet[3924]: E0122 00:45:11.296298 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.296454 kubelet[3924]: I0122 00:45:11.296321 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/222ac10e-a19c-48d7-ba2a-f1cdbf34cf86-kubelet-dir\") pod \"csi-node-driver-78h9h\" (UID: \"222ac10e-a19c-48d7-ba2a-f1cdbf34cf86\") " pod="calico-system/csi-node-driver-78h9h" Jan 22 00:45:11.296454 kubelet[3924]: E0122 00:45:11.296434 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.296454 kubelet[3924]: W0122 00:45:11.296440 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296522 kubelet[3924]: E0122 00:45:11.296454 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.296522 kubelet[3924]: I0122 00:45:11.296487 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/222ac10e-a19c-48d7-ba2a-f1cdbf34cf86-registration-dir\") pod \"csi-node-driver-78h9h\" (UID: \"222ac10e-a19c-48d7-ba2a-f1cdbf34cf86\") " pod="calico-system/csi-node-driver-78h9h" Jan 22 00:45:11.296755 kubelet[3924]: E0122 00:45:11.296629 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.296755 kubelet[3924]: W0122 00:45:11.296638 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296755 kubelet[3924]: E0122 00:45:11.296660 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.296755 kubelet[3924]: I0122 00:45:11.296676 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/222ac10e-a19c-48d7-ba2a-f1cdbf34cf86-socket-dir\") pod \"csi-node-driver-78h9h\" (UID: \"222ac10e-a19c-48d7-ba2a-f1cdbf34cf86\") " pod="calico-system/csi-node-driver-78h9h" Jan 22 00:45:11.296876 kubelet[3924]: E0122 00:45:11.296846 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.296876 kubelet[3924]: W0122 00:45:11.296853 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.296876 kubelet[3924]: E0122 00:45:11.296863 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.297427 kubelet[3924]: E0122 00:45:11.296974 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.297427 kubelet[3924]: W0122 00:45:11.296981 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.297427 kubelet[3924]: E0122 00:45:11.296994 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.297427 kubelet[3924]: E0122 00:45:11.297095 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.297427 kubelet[3924]: W0122 00:45:11.297099 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.297427 kubelet[3924]: E0122 00:45:11.297111 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.297427 kubelet[3924]: E0122 00:45:11.297198 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.297427 kubelet[3924]: W0122 00:45:11.297202 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.297427 kubelet[3924]: E0122 00:45:11.297213 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.297427 kubelet[3924]: E0122 00:45:11.297300 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.297680 kubelet[3924]: W0122 00:45:11.297305 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.297680 kubelet[3924]: E0122 00:45:11.297310 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.297680 kubelet[3924]: E0122 00:45:11.297383 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.297680 kubelet[3924]: W0122 00:45:11.297388 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.297680 kubelet[3924]: E0122 00:45:11.297394 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.335493 containerd[2460]: time="2026-01-22T00:45:11.335460051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7b654f79cf-ks8lc,Uid:16e64268-4537-4e39-9808-d799e9e81037,Namespace:calico-system,Attempt:0,} returns sandbox id \"3425648bf1fd71565614e49141b265b0cd45aff0f8423240b3272c975abba3eb\"" Jan 22 00:45:11.339017 containerd[2460]: time="2026-01-22T00:45:11.338991014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 22 00:45:11.361718 containerd[2460]: time="2026-01-22T00:45:11.361638006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nrfzq,Uid:78743a38-d4b5-4927-8e5e-4f2cc015a03f,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:11.361000 audit[4421]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:11.361000 audit[4421]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffe17b9f50 a2=0 a3=7fffe17b9f3c items=0 ppid=4073 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:11.366000 audit[4421]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4421 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:11.366000 audit[4421]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe17b9f50 a2=0 a3=0 items=0 ppid=4073 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:11.404559 kubelet[3924]: E0122 00:45:11.404306 3924 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.404559 kubelet[3924]: W0122 00:45:11.404325 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.404559 kubelet[3924]: E0122 00:45:11.404342 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.404559 kubelet[3924]: E0122 00:45:11.404449 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.404559 kubelet[3924]: W0122 00:45:11.404454 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.404559 kubelet[3924]: E0122 00:45:11.404460 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.404559 kubelet[3924]: E0122 00:45:11.404549 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.404559 kubelet[3924]: W0122 00:45:11.404554 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.404559 kubelet[3924]: E0122 00:45:11.404563 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.404890 kubelet[3924]: E0122 00:45:11.404659 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.404890 kubelet[3924]: W0122 00:45:11.404664 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.404890 kubelet[3924]: E0122 00:45:11.404671 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.404890 kubelet[3924]: E0122 00:45:11.404782 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.404890 kubelet[3924]: W0122 00:45:11.404788 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.404890 kubelet[3924]: E0122 00:45:11.404794 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.405038 kubelet[3924]: E0122 00:45:11.404896 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.405038 kubelet[3924]: W0122 00:45:11.404901 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.405038 kubelet[3924]: E0122 00:45:11.404908 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.405038 kubelet[3924]: E0122 00:45:11.404986 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.405038 kubelet[3924]: W0122 00:45:11.404991 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.405038 kubelet[3924]: E0122 00:45:11.404997 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.405168 kubelet[3924]: E0122 00:45:11.405074 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.405168 kubelet[3924]: W0122 00:45:11.405078 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.405168 kubelet[3924]: E0122 00:45:11.405084 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.405422 kubelet[3924]: E0122 00:45:11.405337 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.405422 kubelet[3924]: W0122 00:45:11.405351 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.405422 kubelet[3924]: E0122 00:45:11.405370 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.405530 kubelet[3924]: E0122 00:45:11.405524 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.405557 kubelet[3924]: W0122 00:45:11.405553 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.405589 kubelet[3924]: E0122 00:45:11.405583 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.405710 kubelet[3924]: E0122 00:45:11.405704 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.405760 kubelet[3924]: W0122 00:45:11.405754 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.405799 kubelet[3924]: E0122 00:45:11.405791 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.405954 kubelet[3924]: E0122 00:45:11.405944 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.405954 kubelet[3924]: W0122 00:45:11.405953 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.406006 kubelet[3924]: E0122 00:45:11.405968 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.406105 kubelet[3924]: E0122 00:45:11.406095 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.406105 kubelet[3924]: W0122 00:45:11.406103 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.406158 kubelet[3924]: E0122 00:45:11.406112 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.406223 kubelet[3924]: E0122 00:45:11.406213 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.406223 kubelet[3924]: W0122 00:45:11.406221 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.406300 kubelet[3924]: E0122 00:45:11.406231 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.406333 kubelet[3924]: E0122 00:45:11.406316 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.406333 kubelet[3924]: W0122 00:45:11.406321 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.406382 kubelet[3924]: E0122 00:45:11.406335 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.406437 kubelet[3924]: E0122 00:45:11.406426 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.406437 kubelet[3924]: W0122 00:45:11.406436 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.406566 kubelet[3924]: E0122 00:45:11.406554 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.406645 kubelet[3924]: E0122 00:45:11.406635 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.406645 kubelet[3924]: W0122 00:45:11.406642 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.406726 kubelet[3924]: E0122 00:45:11.406658 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.406784 kubelet[3924]: E0122 00:45:11.406768 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.406784 kubelet[3924]: W0122 00:45:11.406773 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.406845 kubelet[3924]: E0122 00:45:11.406835 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.406943 kubelet[3924]: E0122 00:45:11.406932 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.406943 kubelet[3924]: W0122 00:45:11.406940 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.407000 kubelet[3924]: E0122 00:45:11.406949 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.407081 kubelet[3924]: E0122 00:45:11.407072 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.407081 kubelet[3924]: W0122 00:45:11.407078 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.407135 kubelet[3924]: E0122 00:45:11.407090 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.407196 kubelet[3924]: E0122 00:45:11.407186 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.407196 kubelet[3924]: W0122 00:45:11.407193 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.407246 kubelet[3924]: E0122 00:45:11.407205 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.407331 kubelet[3924]: E0122 00:45:11.407321 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.407331 kubelet[3924]: W0122 00:45:11.407329 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.407382 kubelet[3924]: E0122 00:45:11.407337 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.407443 kubelet[3924]: E0122 00:45:11.407433 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.407443 kubelet[3924]: W0122 00:45:11.407440 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.407489 kubelet[3924]: E0122 00:45:11.407471 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.407956 kubelet[3924]: E0122 00:45:11.407766 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.407956 kubelet[3924]: W0122 00:45:11.407778 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.407956 kubelet[3924]: E0122 00:45:11.407800 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.408153 kubelet[3924]: E0122 00:45:11.408136 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.408153 kubelet[3924]: W0122 00:45:11.408150 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.408209 kubelet[3924]: E0122 00:45:11.408161 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:11.413838 kubelet[3924]: E0122 00:45:11.413823 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:11.413948 kubelet[3924]: W0122 00:45:11.413908 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:11.413948 kubelet[3924]: E0122 00:45:11.413926 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:11.420845 containerd[2460]: time="2026-01-22T00:45:11.420798893Z" level=info msg="connecting to shim 12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68" address="unix:///run/containerd/s/d9670a4fd688224d4c2a739a472f35939022125da98a87a6e00afbeecc1806c1" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:11.445063 systemd[1]: Started cri-containerd-12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68.scope - libcontainer container 12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68. Jan 22 00:45:11.451000 audit: BPF prog-id=179 op=LOAD Jan 22 00:45:11.452000 audit: BPF prog-id=180 op=LOAD Jan 22 00:45:11.452000 audit[4470]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4458 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132333335666162313531366235636335313261356363653866636464 Jan 22 00:45:11.452000 audit: BPF prog-id=180 op=UNLOAD Jan 22 00:45:11.452000 audit[4470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132333335666162313531366235636335313261356363653866636464 Jan 22 00:45:11.452000 audit: BPF prog-id=181 op=LOAD Jan 22 00:45:11.452000 audit[4470]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4458 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132333335666162313531366235636335313261356363653866636464 Jan 22 00:45:11.452000 audit: BPF prog-id=182 op=LOAD Jan 22 00:45:11.452000 audit[4470]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4458 pid=4470 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132333335666162313531366235636335313261356363653866636464 Jan 22 00:45:11.452000 audit: BPF prog-id=182 op=UNLOAD Jan 22 00:45:11.452000 audit[4470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132333335666162313531366235636335313261356363653866636464 Jan 22 00:45:11.452000 audit: BPF prog-id=181 op=UNLOAD Jan 22 00:45:11.452000 audit[4470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132333335666162313531366235636335313261356363653866636464 Jan 22 00:45:11.452000 audit: BPF prog-id=183 op=LOAD Jan 22 00:45:11.452000 audit[4470]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4458 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:11.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132333335666162313531366235636335313261356363653866636464 Jan 22 00:45:11.466582 containerd[2460]: time="2026-01-22T00:45:11.466549380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nrfzq,Uid:78743a38-d4b5-4927-8e5e-4f2cc015a03f,Namespace:calico-system,Attempt:0,} returns sandbox id \"12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68\"" Jan 22 00:45:12.700984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount201357280.mount: Deactivated successfully. 
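
The repeated kubelet errors above come from FlexVolume plugin probing: the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument and then tries to parse the driver's stdout as JSON. Because that binary is not present on this node, the captured output is empty, and encoding/json reports "unexpected end of JSON input". A minimal Go sketch of that failure mode (illustrative only, not the kubelet's own code; the driverStatus fields are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the JSON object a FlexVolume
// driver is expected to print on stdout (field names are assumptions).
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Call the driver path from the log; on this node the binary is missing,
	// so the output comes back empty (the exec error wording here differs
	// from the kubelet's "executable file not found in $PATH" message).
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
	}

	// The kubelet still attempts to unmarshal whatever it captured; for an
	// empty byte slice, encoding/json returns "unexpected end of JSON input".
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("failed to unmarshal driver output: %v\n", err)
	}
}
```

The probe reruns on every plugin rescan, which is why the same three messages recur throughout this section.
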
Jan 22 00:45:13.185436 kubelet[3924]: E0122 00:45:13.184935 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:13.721464 containerd[2460]: time="2026-01-22T00:45:13.721419426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:13.724245 containerd[2460]: time="2026-01-22T00:45:13.724139838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 22 00:45:13.726990 containerd[2460]: time="2026-01-22T00:45:13.726964826Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:13.730340 containerd[2460]: time="2026-01-22T00:45:13.730292200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:13.730752 containerd[2460]: time="2026-01-22T00:45:13.730602372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.391581255s" Jan 22 00:45:13.730752 containerd[2460]: time="2026-01-22T00:45:13.730630441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 22 00:45:13.731517 containerd[2460]: time="2026-01-22T00:45:13.731495431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 22 00:45:13.749848 containerd[2460]: time="2026-01-22T00:45:13.749780253Z" level=info msg="CreateContainer within sandbox \"3425648bf1fd71565614e49141b265b0cd45aff0f8423240b3272c975abba3eb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 22 00:45:13.777967 containerd[2460]: time="2026-01-22T00:45:13.777936468Z" level=info msg="Container 16af7d1630da8ac01f34167e155fce6390490239d2ddb9e9a3270472e22c8a4b: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:45:13.781573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1309234164.mount: Deactivated successfully. 
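
The typha pull above logs both a byte counter (bytes read=33735893) and an elapsed time (2.391581255s). A quick back-of-the-envelope sketch in Go that turns those logged figures into an approximate effective pull rate; the byte counter is simply what containerd reports as read for this image, so the result is a rough number only:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the containerd messages above:
	//   "stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: ... bytes read=33735893"
	//   "Pulled image ... in 2.391581255s"
	const bytesRead = 33735893

	elapsed, err := time.ParseDuration("2.391581255s")
	if err != nil {
		panic(err)
	}

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("read %.1f MiB in %s, roughly %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
}
```
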
Jan 22 00:45:13.795980 containerd[2460]: time="2026-01-22T00:45:13.795948025Z" level=info msg="CreateContainer within sandbox \"3425648bf1fd71565614e49141b265b0cd45aff0f8423240b3272c975abba3eb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"16af7d1630da8ac01f34167e155fce6390490239d2ddb9e9a3270472e22c8a4b\"" Jan 22 00:45:13.796505 containerd[2460]: time="2026-01-22T00:45:13.796457399Z" level=info msg="StartContainer for \"16af7d1630da8ac01f34167e155fce6390490239d2ddb9e9a3270472e22c8a4b\"" Jan 22 00:45:13.797550 containerd[2460]: time="2026-01-22T00:45:13.797522464Z" level=info msg="connecting to shim 16af7d1630da8ac01f34167e155fce6390490239d2ddb9e9a3270472e22c8a4b" address="unix:///run/containerd/s/7f5b055e79af9759ec0ba8a370d2ec2e36003c2a04b75edb58f96c2061bf2f02" protocol=ttrpc version=3 Jan 22 00:45:13.825055 systemd[1]: Started cri-containerd-16af7d1630da8ac01f34167e155fce6390490239d2ddb9e9a3270472e22c8a4b.scope - libcontainer container 16af7d1630da8ac01f34167e155fce6390490239d2ddb9e9a3270472e22c8a4b. Jan 22 00:45:13.835000 audit: BPF prog-id=184 op=LOAD Jan 22 00:45:13.836000 audit: BPF prog-id=185 op=LOAD Jan 22 00:45:13.836000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4331 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:13.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136616637643136333064613861633031663334313637653135356663 Jan 22 00:45:13.836000 audit: BPF prog-id=185 op=UNLOAD Jan 22 00:45:13.836000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:13.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136616637643136333064613861633031663334313637653135356663 Jan 22 00:45:13.836000 audit: BPF prog-id=186 op=LOAD Jan 22 00:45:13.836000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4331 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:13.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136616637643136333064613861633031663334313637653135356663 Jan 22 00:45:13.836000 audit: BPF prog-id=187 op=LOAD Jan 22 00:45:13.836000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4331 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:13.836000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136616637643136333064613861633031663334313637653135356663 Jan 22 00:45:13.836000 audit: BPF prog-id=187 op=UNLOAD Jan 22 00:45:13.836000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:13.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136616637643136333064613861633031663334313637653135356663 Jan 22 00:45:13.836000 audit: BPF prog-id=186 op=UNLOAD Jan 22 00:45:13.836000 audit[4505]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:13.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136616637643136333064613861633031663334313637653135356663 Jan 22 00:45:13.836000 audit: BPF prog-id=188 op=LOAD Jan 22 00:45:13.836000 audit[4505]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4331 pid=4505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:13.836000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3136616637643136333064613861633031663334313637653135356663 Jan 22 00:45:13.870968 containerd[2460]: time="2026-01-22T00:45:13.870939178Z" level=info msg="StartContainer for \"16af7d1630da8ac01f34167e155fce6390490239d2ddb9e9a3270472e22c8a4b\" returns successfully" Jan 22 00:45:14.278408 kubelet[3924]: I0122 00:45:14.278165 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7b654f79cf-ks8lc" podStartSLOduration=1.8851686330000001 podStartE2EDuration="4.278146625s" podCreationTimestamp="2026-01-22 00:45:10 +0000 UTC" firstStartedPulling="2026-01-22 00:45:11.338404818 +0000 UTC m=+20.244638511" lastFinishedPulling="2026-01-22 00:45:13.731382812 +0000 UTC m=+22.637616503" observedRunningTime="2026-01-22 00:45:14.277599392 +0000 UTC m=+23.183833084" watchObservedRunningTime="2026-01-22 00:45:14.278146625 +0000 UTC m=+23.184380315" Jan 22 00:45:14.311520 kubelet[3924]: E0122 00:45:14.311432 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.311520 kubelet[3924]: W0122 00:45:14.311457 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable 
file not found in $PATH, output: "" Jan 22 00:45:14.311520 kubelet[3924]: E0122 00:45:14.311478 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.311939 kubelet[3924]: E0122 00:45:14.311877 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.311939 kubelet[3924]: W0122 00:45:14.311890 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.311939 kubelet[3924]: E0122 00:45:14.311904 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.312195 kubelet[3924]: E0122 00:45:14.312139 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.312195 kubelet[3924]: W0122 00:45:14.312155 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.312195 kubelet[3924]: E0122 00:45:14.312165 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.312470 kubelet[3924]: E0122 00:45:14.312417 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.312470 kubelet[3924]: W0122 00:45:14.312425 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.312470 kubelet[3924]: E0122 00:45:14.312433 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.312811 kubelet[3924]: E0122 00:45:14.312646 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.312811 kubelet[3924]: W0122 00:45:14.312766 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.312811 kubelet[3924]: E0122 00:45:14.312778 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:14.315131 kubelet[3924]: E0122 00:45:14.315056 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.315131 kubelet[3924]: W0122 00:45:14.315074 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.315131 kubelet[3924]: E0122 00:45:14.315090 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.315404 kubelet[3924]: E0122 00:45:14.315351 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.315404 kubelet[3924]: W0122 00:45:14.315362 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.315404 kubelet[3924]: E0122 00:45:14.315372 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.315652 kubelet[3924]: E0122 00:45:14.315596 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.315652 kubelet[3924]: W0122 00:45:14.315604 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.315652 kubelet[3924]: E0122 00:45:14.315612 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.315866 kubelet[3924]: E0122 00:45:14.315859 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.315941 kubelet[3924]: W0122 00:45:14.315903 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.315941 kubelet[3924]: E0122 00:45:14.315913 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.316132 kubelet[3924]: E0122 00:45:14.316092 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.316132 kubelet[3924]: W0122 00:45:14.316100 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.316132 kubelet[3924]: E0122 00:45:14.316107 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:14.316325 kubelet[3924]: E0122 00:45:14.316285 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.316325 kubelet[3924]: W0122 00:45:14.316293 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.316325 kubelet[3924]: E0122 00:45:14.316300 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.316536 kubelet[3924]: E0122 00:45:14.316483 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.316536 kubelet[3924]: W0122 00:45:14.316491 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.316536 kubelet[3924]: E0122 00:45:14.316498 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.316747 kubelet[3924]: E0122 00:45:14.316703 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.316747 kubelet[3924]: W0122 00:45:14.316712 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.316747 kubelet[3924]: E0122 00:45:14.316721 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.316973 kubelet[3924]: E0122 00:45:14.316933 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.316973 kubelet[3924]: W0122 00:45:14.316941 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.316973 kubelet[3924]: E0122 00:45:14.316949 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.317187 kubelet[3924]: E0122 00:45:14.317124 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.317187 kubelet[3924]: W0122 00:45:14.317131 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.317187 kubelet[3924]: E0122 00:45:14.317138 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:14.324472 kubelet[3924]: E0122 00:45:14.324420 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.324472 kubelet[3924]: W0122 00:45:14.324436 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.324472 kubelet[3924]: E0122 00:45:14.324450 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.324829 kubelet[3924]: E0122 00:45:14.324807 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.324829 kubelet[3924]: W0122 00:45:14.324817 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.324961 kubelet[3924]: E0122 00:45:14.324908 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.325113 kubelet[3924]: E0122 00:45:14.325107 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.325160 kubelet[3924]: W0122 00:45:14.325153 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.325253 kubelet[3924]: E0122 00:45:14.325199 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.325421 kubelet[3924]: E0122 00:45:14.325403 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.325421 kubelet[3924]: W0122 00:45:14.325414 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.325500 kubelet[3924]: E0122 00:45:14.325431 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.325592 kubelet[3924]: E0122 00:45:14.325580 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.325592 kubelet[3924]: W0122 00:45:14.325589 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.325649 kubelet[3924]: E0122 00:45:14.325605 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:14.325792 kubelet[3924]: E0122 00:45:14.325778 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.325824 kubelet[3924]: W0122 00:45:14.325791 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.325824 kubelet[3924]: E0122 00:45:14.325802 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.325974 kubelet[3924]: E0122 00:45:14.325946 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.325974 kubelet[3924]: W0122 00:45:14.325954 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.325974 kubelet[3924]: E0122 00:45:14.325962 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.326403 kubelet[3924]: E0122 00:45:14.326383 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.326403 kubelet[3924]: W0122 00:45:14.326402 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.326497 kubelet[3924]: E0122 00:45:14.326417 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.326567 kubelet[3924]: E0122 00:45:14.326553 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.326567 kubelet[3924]: W0122 00:45:14.326561 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.326681 kubelet[3924]: E0122 00:45:14.326568 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.326681 kubelet[3924]: E0122 00:45:14.326668 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.326681 kubelet[3924]: W0122 00:45:14.326675 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.326799 kubelet[3924]: E0122 00:45:14.326791 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:14.326846 kubelet[3924]: E0122 00:45:14.326831 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.326846 kubelet[3924]: W0122 00:45:14.326840 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.326947 kubelet[3924]: E0122 00:45:14.326919 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.326977 kubelet[3924]: E0122 00:45:14.326948 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.326977 kubelet[3924]: W0122 00:45:14.326954 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.326977 kubelet[3924]: E0122 00:45:14.326968 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.327327 kubelet[3924]: E0122 00:45:14.327150 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.327327 kubelet[3924]: W0122 00:45:14.327158 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.327327 kubelet[3924]: E0122 00:45:14.327172 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.327471 kubelet[3924]: E0122 00:45:14.327351 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.327471 kubelet[3924]: W0122 00:45:14.327358 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.327471 kubelet[3924]: E0122 00:45:14.327367 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.327836 kubelet[3924]: E0122 00:45:14.327789 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.327836 kubelet[3924]: W0122 00:45:14.327801 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.327836 kubelet[3924]: E0122 00:45:14.327810 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:14.328013 kubelet[3924]: E0122 00:45:14.327994 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.328013 kubelet[3924]: W0122 00:45:14.328004 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.328074 kubelet[3924]: E0122 00:45:14.328023 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.328321 kubelet[3924]: E0122 00:45:14.328263 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.328321 kubelet[3924]: W0122 00:45:14.328272 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.328321 kubelet[3924]: E0122 00:45:14.328283 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 22 00:45:14.328422 kubelet[3924]: E0122 00:45:14.328404 3924 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 22 00:45:14.328422 kubelet[3924]: W0122 00:45:14.328409 3924 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 22 00:45:14.328422 kubelet[3924]: E0122 00:45:14.328416 3924 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 22 00:45:14.898580 containerd[2460]: time="2026-01-22T00:45:14.898502352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:14.901237 containerd[2460]: time="2026-01-22T00:45:14.901155197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:14.904175 containerd[2460]: time="2026-01-22T00:45:14.903959815Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:14.907449 containerd[2460]: time="2026-01-22T00:45:14.907404031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:14.907939 containerd[2460]: time="2026-01-22T00:45:14.907759028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.176153165s" Jan 22 00:45:14.907939 containerd[2460]: time="2026-01-22T00:45:14.907787809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 22 00:45:14.910671 containerd[2460]: time="2026-01-22T00:45:14.910643905Z" level=info msg="CreateContainer within sandbox \"12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 22 00:45:14.930763 containerd[2460]: time="2026-01-22T00:45:14.930085800Z" level=info msg="Container 1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:45:14.949090 containerd[2460]: time="2026-01-22T00:45:14.949059505Z" level=info msg="CreateContainer within sandbox \"12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7\"" Jan 22 00:45:14.949771 containerd[2460]: time="2026-01-22T00:45:14.949463822Z" level=info msg="StartContainer for \"1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7\"" Jan 22 00:45:14.951023 containerd[2460]: time="2026-01-22T00:45:14.950995276Z" level=info msg="connecting to shim 1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7" address="unix:///run/containerd/s/d9670a4fd688224d4c2a739a472f35939022125da98a87a6e00afbeecc1806c1" protocol=ttrpc version=3 Jan 22 00:45:14.971929 systemd[1]: Started cri-containerd-1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7.scope - libcontainer container 1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7. 
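
The audit records interleaved above (for example the runc invocations by pids 4470 and 4505) carry PROCTITLE fields such as proctitle=72756E63002D2D726F6F74... The value is the process command line, hex-encoded because it contains the NUL bytes that separate argv entries. A small Go sketch that decodes a verbatim prefix of one of those values back into readable text:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Verbatim prefix of a PROCTITLE value from the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}

	// The kernel joins argv with NUL bytes; split on them to recover the args.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Prints: runc --root /run/containerd/runc/k8s.io --log
}
```

The remainder of each value decodes to the (truncated) task path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/ for the container being started.
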
Jan 22 00:45:15.004000 audit: BPF prog-id=189 op=LOAD Jan 22 00:45:15.004000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4458 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:15.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303865336462643461646233323963653437326538623765643131 Jan 22 00:45:15.004000 audit: BPF prog-id=190 op=LOAD Jan 22 00:45:15.004000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4458 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:15.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303865336462643461646233323963653437326538623765643131 Jan 22 00:45:15.004000 audit: BPF prog-id=190 op=UNLOAD Jan 22 00:45:15.004000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:15.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303865336462643461646233323963653437326538623765643131 Jan 22 00:45:15.004000 audit: BPF prog-id=189 op=UNLOAD Jan 22 00:45:15.004000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:15.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303865336462643461646233323963653437326538623765643131 Jan 22 00:45:15.004000 audit: BPF prog-id=191 op=LOAD Jan 22 00:45:15.004000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4458 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:15.004000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303865336462643461646233323963653437326538623765643131 Jan 22 00:45:15.024340 containerd[2460]: time="2026-01-22T00:45:15.024309748Z" level=info msg="StartContainer for 
\"1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7\" returns successfully" Jan 22 00:45:15.029051 systemd[1]: cri-containerd-1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7.scope: Deactivated successfully. Jan 22 00:45:15.032636 containerd[2460]: time="2026-01-22T00:45:15.032612269Z" level=info msg="received container exit event container_id:\"1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7\" id:\"1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7\" pid:4593 exited_at:{seconds:1769042715 nanos:32251546}" Jan 22 00:45:15.032000 audit: BPF prog-id=191 op=UNLOAD Jan 22 00:45:15.050187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1208e3dbd4adb329ce472e8b7ed112544da4275e368c66b77d1d4da753fcc2a7-rootfs.mount: Deactivated successfully. Jan 22 00:45:15.268157 kubelet[3924]: E0122 00:45:15.184135 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:15.272260 kubelet[3924]: I0122 00:45:15.271891 3924 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 00:45:17.184388 kubelet[3924]: E0122 00:45:17.184339 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:17.278326 containerd[2460]: time="2026-01-22T00:45:17.278270643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 22 00:45:17.457105 kubelet[3924]: I0122 00:45:17.456644 3924 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 00:45:17.487000 audit[4631]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:17.489135 kernel: kauditd_printk_skb: 78 callbacks suppressed Jan 22 00:45:17.489197 kernel: audit: type=1325 audit(1769042717.487:576): table=filter:120 family=2 entries=21 op=nft_register_rule pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:17.487000 audit[4631]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8fcf0730 a2=0 a3=7ffc8fcf071c items=0 ppid=4073 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:17.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:17.505711 kernel: audit: type=1300 audit(1769042717.487:576): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc8fcf0730 a2=0 a3=7ffc8fcf071c items=0 ppid=4073 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:17.505827 kernel: audit: type=1327 audit(1769042717.487:576): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:17.495000 audit[4631]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:17.495000 audit[4631]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc8fcf0730 a2=0 a3=7ffc8fcf071c items=0 ppid=4073 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:17.514256 kernel: audit: type=1325 audit(1769042717.495:577): table=nat:121 family=2 entries=19 op=nft_register_chain pid=4631 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:17.514317 kernel: audit: type=1300 audit(1769042717.495:577): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc8fcf0730 a2=0 a3=7ffc8fcf071c items=0 ppid=4073 pid=4631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:17.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:17.517683 kernel: audit: type=1327 audit(1769042717.495:577): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:19.185775 kubelet[3924]: E0122 00:45:19.184901 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:20.715095 containerd[2460]: time="2026-01-22T00:45:20.715044801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:20.717438 containerd[2460]: time="2026-01-22T00:45:20.717399465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 22 00:45:20.720368 containerd[2460]: time="2026-01-22T00:45:20.720324668Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:20.724397 containerd[2460]: time="2026-01-22T00:45:20.724347399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:20.725156 containerd[2460]: time="2026-01-22T00:45:20.724838129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.446529769s" Jan 22 00:45:20.725156 containerd[2460]: time="2026-01-22T00:45:20.724868294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 22 00:45:20.727610 containerd[2460]: time="2026-01-22T00:45:20.727583517Z" level=info msg="CreateContainer within sandbox \"12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 22 00:45:20.745939 containerd[2460]: time="2026-01-22T00:45:20.745912687Z" level=info msg="Container f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:45:20.750801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436118941.mount: Deactivated successfully. Jan 22 00:45:20.763434 containerd[2460]: time="2026-01-22T00:45:20.763392852Z" level=info msg="CreateContainer within sandbox \"12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48\"" Jan 22 00:45:20.764423 containerd[2460]: time="2026-01-22T00:45:20.764309053Z" level=info msg="StartContainer for \"f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48\"" Jan 22 00:45:20.766590 containerd[2460]: time="2026-01-22T00:45:20.766565434Z" level=info msg="connecting to shim f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48" address="unix:///run/containerd/s/d9670a4fd688224d4c2a739a472f35939022125da98a87a6e00afbeecc1806c1" protocol=ttrpc version=3 Jan 22 00:45:20.790910 systemd[1]: Started cri-containerd-f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48.scope - libcontainer container f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48. Jan 22 00:45:20.823000 audit: BPF prog-id=192 op=LOAD Jan 22 00:45:20.823000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4458 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:20.830445 kernel: audit: type=1334 audit(1769042720.823:578): prog-id=192 op=LOAD Jan 22 00:45:20.830519 kernel: audit: type=1300 audit(1769042720.823:578): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4458 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:20.835253 kernel: audit: type=1327 audit(1769042720.823:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633313039623837633866393535366464343730643234303437646337 Jan 22 00:45:20.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633313039623837633866393535366464343730643234303437646337 Jan 22 00:45:20.837788 kernel: audit: type=1334 audit(1769042720.823:579): prog-id=193 op=LOAD Jan 22 00:45:20.823000 audit: BPF prog-id=193 op=LOAD Jan 22 00:45:20.823000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4458 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:20.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633313039623837633866393535366464343730643234303437646337 Jan 22 00:45:20.823000 audit: BPF prog-id=193 op=UNLOAD Jan 22 00:45:20.823000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:20.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633313039623837633866393535366464343730643234303437646337 Jan 22 00:45:20.823000 audit: BPF prog-id=192 op=UNLOAD Jan 22 00:45:20.823000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:20.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633313039623837633866393535366464343730643234303437646337 Jan 22 00:45:20.823000 audit: BPF prog-id=194 op=LOAD Jan 22 00:45:20.823000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4458 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:20.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633313039623837633866393535366464343730643234303437646337 Jan 22 00:45:20.861383 containerd[2460]: time="2026-01-22T00:45:20.861357650Z" level=info msg="StartContainer for \"f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48\" returns successfully" Jan 22 00:45:21.185767 kubelet[3924]: E0122 00:45:21.185131 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:22.001106 containerd[2460]: time="2026-01-22T00:45:22.001058821Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 22 00:45:22.003222 systemd[1]: cri-containerd-f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48.scope: Deactivated successfully. 
Jan 22 00:45:22.003536 systemd[1]: cri-containerd-f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48.scope: Consumed 429ms CPU time, 194.3M memory peak, 171.3M written to disk. Jan 22 00:45:22.004961 containerd[2460]: time="2026-01-22T00:45:22.004917488Z" level=info msg="received container exit event container_id:\"f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48\" id:\"f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48\" pid:4653 exited_at:{seconds:1769042722 nanos:4694555}" Jan 22 00:45:22.006000 audit: BPF prog-id=194 op=UNLOAD Jan 22 00:45:22.022434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3109b87c8f9556dd470d24047dc74efb51592004c645e6a5b9b2e950c916f48-rootfs.mount: Deactivated successfully. Jan 22 00:45:22.100754 kubelet[3924]: I0122 00:45:22.100703 3924 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 22 00:45:22.138025 systemd[1]: Created slice kubepods-burstable-pod0681ac92_d1cc_4472_a4c1_25459fbeba8f.slice - libcontainer container kubepods-burstable-pod0681ac92_d1cc_4472_a4c1_25459fbeba8f.slice. Jan 22 00:45:22.163541 systemd[1]: Created slice kubepods-burstable-pod9f7bd783_d45a_483f_ab06_4c0a65b60ba1.slice - libcontainer container kubepods-burstable-pod9f7bd783_d45a_483f_ab06_4c0a65b60ba1.slice. Jan 22 00:45:22.171059 kubelet[3924]: I0122 00:45:22.171022 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c96102f-5e79-4a6e-9dde-550f505c5961-tigera-ca-bundle\") pod \"calico-kube-controllers-5555c47f4f-szgpn\" (UID: \"1c96102f-5e79-4a6e-9dde-550f505c5961\") " pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" Jan 22 00:45:22.171059 kubelet[3924]: I0122 00:45:22.171056 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f7bd783-d45a-483f-ab06-4c0a65b60ba1-config-volume\") pod \"coredns-668d6bf9bc-ctkzj\" (UID: \"9f7bd783-d45a-483f-ab06-4c0a65b60ba1\") " pod="kube-system/coredns-668d6bf9bc-ctkzj" Jan 22 00:45:22.171196 kubelet[3924]: I0122 00:45:22.171075 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tnjh\" (UniqueName: \"kubernetes.io/projected/9f7bd783-d45a-483f-ab06-4c0a65b60ba1-kube-api-access-6tnjh\") pod \"coredns-668d6bf9bc-ctkzj\" (UID: \"9f7bd783-d45a-483f-ab06-4c0a65b60ba1\") " pod="kube-system/coredns-668d6bf9bc-ctkzj" Jan 22 00:45:22.171196 kubelet[3924]: I0122 00:45:22.171092 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-backend-key-pair\") pod \"whisker-58cbc989d5-xxjxb\" (UID: \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\") " pod="calico-system/whisker-58cbc989d5-xxjxb" Jan 22 00:45:22.171196 kubelet[3924]: I0122 00:45:22.171111 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0681ac92-d1cc-4472-a4c1-25459fbeba8f-config-volume\") pod \"coredns-668d6bf9bc-7ms67\" (UID: \"0681ac92-d1cc-4472-a4c1-25459fbeba8f\") " pod="kube-system/coredns-668d6bf9bc-7ms67" Jan 22 00:45:22.171196 kubelet[3924]: I0122 00:45:22.171129 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-799j2\" (UniqueName: \"kubernetes.io/projected/0681ac92-d1cc-4472-a4c1-25459fbeba8f-kube-api-access-799j2\") pod \"coredns-668d6bf9bc-7ms67\" (UID: \"0681ac92-d1cc-4472-a4c1-25459fbeba8f\") " pod="kube-system/coredns-668d6bf9bc-7ms67" Jan 22 00:45:22.171196 kubelet[3924]: I0122 00:45:22.171147 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77488da9-2016-4a7d-b29a-dee9cf79fa65-calico-apiserver-certs\") pod \"calico-apiserver-68764f557f-r6nmq\" (UID: \"77488da9-2016-4a7d-b29a-dee9cf79fa65\") " pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" Jan 22 00:45:22.171324 kubelet[3924]: I0122 00:45:22.171166 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-ca-bundle\") pod \"whisker-58cbc989d5-xxjxb\" (UID: \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\") " pod="calico-system/whisker-58cbc989d5-xxjxb" Jan 22 00:45:22.171324 kubelet[3924]: I0122 00:45:22.171188 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60deceea-d34a-4dce-b9f3-936e34e45689-config\") pod \"goldmane-666569f655-pmkws\" (UID: \"60deceea-d34a-4dce-b9f3-936e34e45689\") " pod="calico-system/goldmane-666569f655-pmkws" Jan 22 00:45:22.171324 kubelet[3924]: I0122 00:45:22.171207 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60deceea-d34a-4dce-b9f3-936e34e45689-goldmane-ca-bundle\") pod \"goldmane-666569f655-pmkws\" (UID: \"60deceea-d34a-4dce-b9f3-936e34e45689\") " pod="calico-system/goldmane-666569f655-pmkws" Jan 22 00:45:22.171324 kubelet[3924]: I0122 00:45:22.171225 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/60deceea-d34a-4dce-b9f3-936e34e45689-goldmane-key-pair\") pod \"goldmane-666569f655-pmkws\" (UID: \"60deceea-d34a-4dce-b9f3-936e34e45689\") " pod="calico-system/goldmane-666569f655-pmkws" Jan 22 00:45:22.171324 kubelet[3924]: I0122 00:45:22.171245 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98q7m\" (UniqueName: \"kubernetes.io/projected/60deceea-d34a-4dce-b9f3-936e34e45689-kube-api-access-98q7m\") pod \"goldmane-666569f655-pmkws\" (UID: \"60deceea-d34a-4dce-b9f3-936e34e45689\") " pod="calico-system/goldmane-666569f655-pmkws" Jan 22 00:45:22.171448 kubelet[3924]: I0122 00:45:22.171262 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h22j\" (UniqueName: \"kubernetes.io/projected/1c96102f-5e79-4a6e-9dde-550f505c5961-kube-api-access-7h22j\") pod \"calico-kube-controllers-5555c47f4f-szgpn\" (UID: \"1c96102f-5e79-4a6e-9dde-550f505c5961\") " pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" Jan 22 00:45:22.171448 kubelet[3924]: I0122 00:45:22.171284 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz7bc\" (UniqueName: \"kubernetes.io/projected/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-kube-api-access-dz7bc\") pod \"whisker-58cbc989d5-xxjxb\" (UID: \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\") " 
pod="calico-system/whisker-58cbc989d5-xxjxb" Jan 22 00:45:22.171448 kubelet[3924]: I0122 00:45:22.171302 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqnq\" (UniqueName: \"kubernetes.io/projected/77488da9-2016-4a7d-b29a-dee9cf79fa65-kube-api-access-tsqnq\") pod \"calico-apiserver-68764f557f-r6nmq\" (UID: \"77488da9-2016-4a7d-b29a-dee9cf79fa65\") " pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" Jan 22 00:45:22.178629 systemd[1]: Created slice kubepods-besteffort-pod60deceea_d34a_4dce_b9f3_936e34e45689.slice - libcontainer container kubepods-besteffort-pod60deceea_d34a_4dce_b9f3_936e34e45689.slice. Jan 22 00:45:22.186581 systemd[1]: Created slice kubepods-besteffort-pod1c96102f_5e79_4a6e_9dde_550f505c5961.slice - libcontainer container kubepods-besteffort-pod1c96102f_5e79_4a6e_9dde_550f505c5961.slice. Jan 22 00:45:22.193376 systemd[1]: Created slice kubepods-besteffort-pod77488da9_2016_4a7d_b29a_dee9cf79fa65.slice - libcontainer container kubepods-besteffort-pod77488da9_2016_4a7d_b29a_dee9cf79fa65.slice. Jan 22 00:45:22.197758 systemd[1]: Created slice kubepods-besteffort-pod1e056b93_8ad4_41ba_8df7_4e9107c9c36c.slice - libcontainer container kubepods-besteffort-pod1e056b93_8ad4_41ba_8df7_4e9107c9c36c.slice. Jan 22 00:45:22.204824 systemd[1]: Created slice kubepods-besteffort-poddd066ec6_7b8c_4975_b067_940020b582cf.slice - libcontainer container kubepods-besteffort-poddd066ec6_7b8c_4975_b067_940020b582cf.slice. Jan 22 00:45:22.271902 kubelet[3924]: I0122 00:45:22.271725 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8jng\" (UniqueName: \"kubernetes.io/projected/dd066ec6-7b8c-4975-b067-940020b582cf-kube-api-access-f8jng\") pod \"calico-apiserver-68764f557f-l8tpm\" (UID: \"dd066ec6-7b8c-4975-b067-940020b582cf\") " pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" Jan 22 00:45:22.271902 kubelet[3924]: I0122 00:45:22.271847 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd066ec6-7b8c-4975-b067-940020b582cf-calico-apiserver-certs\") pod \"calico-apiserver-68764f557f-l8tpm\" (UID: \"dd066ec6-7b8c-4975-b067-940020b582cf\") " pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" Jan 22 00:45:22.554800 containerd[2460]: time="2026-01-22T00:45:22.554426768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmkws,Uid:60deceea-d34a-4dce-b9f3-936e34e45689,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:22.554800 containerd[2460]: time="2026-01-22T00:45:22.554460169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58cbc989d5-xxjxb,Uid:1e056b93-8ad4-41ba-8df7-4e9107c9c36c,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:22.554800 containerd[2460]: time="2026-01-22T00:45:22.554426806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68764f557f-r6nmq,Uid:77488da9-2016-4a7d-b29a-dee9cf79fa65,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:45:22.554800 containerd[2460]: time="2026-01-22T00:45:22.554679100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5555c47f4f-szgpn,Uid:1c96102f-5e79-4a6e-9dde-550f505c5961,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:22.745203 containerd[2460]: time="2026-01-22T00:45:22.745165431Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-7ms67,Uid:0681ac92-d1cc-4472-a4c1-25459fbeba8f,Namespace:kube-system,Attempt:0,}" Jan 22 00:45:22.775940 containerd[2460]: time="2026-01-22T00:45:22.775900560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ctkzj,Uid:9f7bd783-d45a-483f-ab06-4c0a65b60ba1,Namespace:kube-system,Attempt:0,}" Jan 22 00:45:22.809644 containerd[2460]: time="2026-01-22T00:45:22.809555725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68764f557f-l8tpm,Uid:dd066ec6-7b8c-4975-b067-940020b582cf,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:45:23.002179 containerd[2460]: time="2026-01-22T00:45:23.002115170Z" level=error msg="Failed to destroy network for sandbox \"de35212d9adaa8b264c52e9e41b3e103a0a03ac927ef175a890b72328adc3df9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.016069 containerd[2460]: time="2026-01-22T00:45:23.016023579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58cbc989d5-xxjxb,Uid:1e056b93-8ad4-41ba-8df7-4e9107c9c36c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de35212d9adaa8b264c52e9e41b3e103a0a03ac927ef175a890b72328adc3df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.017021 kubelet[3924]: E0122 00:45:23.016249 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de35212d9adaa8b264c52e9e41b3e103a0a03ac927ef175a890b72328adc3df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.017021 kubelet[3924]: E0122 00:45:23.016335 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de35212d9adaa8b264c52e9e41b3e103a0a03ac927ef175a890b72328adc3df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58cbc989d5-xxjxb" Jan 22 00:45:23.017021 kubelet[3924]: E0122 00:45:23.016362 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de35212d9adaa8b264c52e9e41b3e103a0a03ac927ef175a890b72328adc3df9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58cbc989d5-xxjxb" Jan 22 00:45:23.017157 kubelet[3924]: E0122 00:45:23.016411 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58cbc989d5-xxjxb_calico-system(1e056b93-8ad4-41ba-8df7-4e9107c9c36c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58cbc989d5-xxjxb_calico-system(1e056b93-8ad4-41ba-8df7-4e9107c9c36c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de35212d9adaa8b264c52e9e41b3e103a0a03ac927ef175a890b72328adc3df9\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58cbc989d5-xxjxb" podUID="1e056b93-8ad4-41ba-8df7-4e9107c9c36c" Jan 22 00:45:23.115181 containerd[2460]: time="2026-01-22T00:45:23.115075365Z" level=error msg="Failed to destroy network for sandbox \"bbc712d22d2bb3a7c8bc8357c9f71dd8fb6d96e3104db1275c45812ed0b94fef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.117681 systemd[1]: run-netns-cni\x2dc742f098\x2dd69a\x2dc9f0\x2dcef4\x2d70ac4358c628.mount: Deactivated successfully. Jan 22 00:45:23.125985 containerd[2460]: time="2026-01-22T00:45:23.125745263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmkws,Uid:60deceea-d34a-4dce-b9f3-936e34e45689,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc712d22d2bb3a7c8bc8357c9f71dd8fb6d96e3104db1275c45812ed0b94fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.127147 kubelet[3924]: E0122 00:45:23.126992 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc712d22d2bb3a7c8bc8357c9f71dd8fb6d96e3104db1275c45812ed0b94fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.127147 kubelet[3924]: E0122 00:45:23.127073 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc712d22d2bb3a7c8bc8357c9f71dd8fb6d96e3104db1275c45812ed0b94fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pmkws" Jan 22 00:45:23.127147 kubelet[3924]: E0122 00:45:23.127098 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc712d22d2bb3a7c8bc8357c9f71dd8fb6d96e3104db1275c45812ed0b94fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pmkws" Jan 22 00:45:23.127314 kubelet[3924]: E0122 00:45:23.127161 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pmkws_calico-system(60deceea-d34a-4dce-b9f3-936e34e45689)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pmkws_calico-system(60deceea-d34a-4dce-b9f3-936e34e45689)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbc712d22d2bb3a7c8bc8357c9f71dd8fb6d96e3104db1275c45812ed0b94fef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:45:23.139607 containerd[2460]: time="2026-01-22T00:45:23.139493524Z" level=error msg="Failed to destroy network for sandbox \"b77378c98fbcd4f9d551674ffe72f94d736431b0459982f477df868bea4e073a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.143940 systemd[1]: run-netns-cni\x2d607a028c\x2d2b78\x2ddfa3\x2dcad0\x2d39c4d54a0473.mount: Deactivated successfully. Jan 22 00:45:23.146397 containerd[2460]: time="2026-01-22T00:45:23.146363248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ctkzj,Uid:9f7bd783-d45a-483f-ab06-4c0a65b60ba1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77378c98fbcd4f9d551674ffe72f94d736431b0459982f477df868bea4e073a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.146520 containerd[2460]: time="2026-01-22T00:45:23.146471861Z" level=error msg="Failed to destroy network for sandbox \"b6903dc8f0a16039322e57a1474d211149b79d0833fb37901e90507ee18a7bb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.146892 kubelet[3924]: E0122 00:45:23.146865 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77378c98fbcd4f9d551674ffe72f94d736431b0459982f477df868bea4e073a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.146940 kubelet[3924]: E0122 00:45:23.146916 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77378c98fbcd4f9d551674ffe72f94d736431b0459982f477df868bea4e073a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ctkzj" Jan 22 00:45:23.147238 kubelet[3924]: E0122 00:45:23.146935 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b77378c98fbcd4f9d551674ffe72f94d736431b0459982f477df868bea4e073a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ctkzj" Jan 22 00:45:23.147845 kubelet[3924]: E0122 00:45:23.147814 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ctkzj_kube-system(9f7bd783-d45a-483f-ab06-4c0a65b60ba1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ctkzj_kube-system(9f7bd783-d45a-483f-ab06-4c0a65b60ba1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b77378c98fbcd4f9d551674ffe72f94d736431b0459982f477df868bea4e073a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ctkzj" podUID="9f7bd783-d45a-483f-ab06-4c0a65b60ba1" Jan 22 00:45:23.149989 systemd[1]: run-netns-cni\x2dee066a43\x2d94e3\x2ddfe2\x2d565d\x2d6134360020ae.mount: Deactivated successfully. Jan 22 00:45:23.156358 containerd[2460]: time="2026-01-22T00:45:23.156321450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5555c47f4f-szgpn,Uid:1c96102f-5e79-4a6e-9dde-550f505c5961,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6903dc8f0a16039322e57a1474d211149b79d0833fb37901e90507ee18a7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.157057 kubelet[3924]: E0122 00:45:23.156492 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6903dc8f0a16039322e57a1474d211149b79d0833fb37901e90507ee18a7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.157057 kubelet[3924]: E0122 00:45:23.156545 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6903dc8f0a16039322e57a1474d211149b79d0833fb37901e90507ee18a7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" Jan 22 00:45:23.157057 kubelet[3924]: E0122 00:45:23.156565 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6903dc8f0a16039322e57a1474d211149b79d0833fb37901e90507ee18a7bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" Jan 22 00:45:23.157168 kubelet[3924]: E0122 00:45:23.156612 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5555c47f4f-szgpn_calico-system(1c96102f-5e79-4a6e-9dde-550f505c5961)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5555c47f4f-szgpn_calico-system(1c96102f-5e79-4a6e-9dde-550f505c5961)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6903dc8f0a16039322e57a1474d211149b79d0833fb37901e90507ee18a7bb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:45:23.161099 containerd[2460]: time="2026-01-22T00:45:23.160208003Z" level=error msg="Failed to destroy network for sandbox \"fda1ecdf6fb1a5baebaf434421d7157645fc0a783b9c93e374943d96ae108d8c\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.163038 systemd[1]: run-netns-cni\x2dcc001052\x2d7386\x2d9899\x2dd22d\x2d6db5c3600233.mount: Deactivated successfully. Jan 22 00:45:23.167642 containerd[2460]: time="2026-01-22T00:45:23.167601528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68764f557f-r6nmq,Uid:77488da9-2016-4a7d-b29a-dee9cf79fa65,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda1ecdf6fb1a5baebaf434421d7157645fc0a783b9c93e374943d96ae108d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.167818 kubelet[3924]: E0122 00:45:23.167785 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda1ecdf6fb1a5baebaf434421d7157645fc0a783b9c93e374943d96ae108d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.167972 kubelet[3924]: E0122 00:45:23.167825 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda1ecdf6fb1a5baebaf434421d7157645fc0a783b9c93e374943d96ae108d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" Jan 22 00:45:23.167972 kubelet[3924]: E0122 00:45:23.167846 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fda1ecdf6fb1a5baebaf434421d7157645fc0a783b9c93e374943d96ae108d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" Jan 22 00:45:23.167972 kubelet[3924]: E0122 00:45:23.167884 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68764f557f-r6nmq_calico-apiserver(77488da9-2016-4a7d-b29a-dee9cf79fa65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68764f557f-r6nmq_calico-apiserver(77488da9-2016-4a7d-b29a-dee9cf79fa65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fda1ecdf6fb1a5baebaf434421d7157645fc0a783b9c93e374943d96ae108d8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:45:23.171780 containerd[2460]: time="2026-01-22T00:45:23.171484914Z" level=error msg="Failed to destroy network for sandbox \"ddc876e3f4ec7f8c286e7eb3359bdfe9aa9fe1df4a37e989e37a2c4b13cc7709\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 22 00:45:23.172237 containerd[2460]: time="2026-01-22T00:45:23.172209761Z" level=error msg="Failed to destroy network for sandbox \"ef1f9b9e2a402d8e9ca0b0b82247862910b1cb22768c1db111040731fa753c99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.181470 containerd[2460]: time="2026-01-22T00:45:23.181373871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68764f557f-l8tpm,Uid:dd066ec6-7b8c-4975-b067-940020b582cf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1f9b9e2a402d8e9ca0b0b82247862910b1cb22768c1db111040731fa753c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.181772 kubelet[3924]: E0122 00:45:23.181731 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1f9b9e2a402d8e9ca0b0b82247862910b1cb22768c1db111040731fa753c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.181839 kubelet[3924]: E0122 00:45:23.181790 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1f9b9e2a402d8e9ca0b0b82247862910b1cb22768c1db111040731fa753c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" Jan 22 00:45:23.181839 kubelet[3924]: E0122 00:45:23.181807 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef1f9b9e2a402d8e9ca0b0b82247862910b1cb22768c1db111040731fa753c99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" Jan 22 00:45:23.181893 kubelet[3924]: E0122 00:45:23.181851 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68764f557f-l8tpm_calico-apiserver(dd066ec6-7b8c-4975-b067-940020b582cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68764f557f-l8tpm_calico-apiserver(dd066ec6-7b8c-4975-b067-940020b582cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef1f9b9e2a402d8e9ca0b0b82247862910b1cb22768c1db111040731fa753c99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:45:23.185006 containerd[2460]: time="2026-01-22T00:45:23.184586226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7ms67,Uid:0681ac92-d1cc-4472-a4c1-25459fbeba8f,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc876e3f4ec7f8c286e7eb3359bdfe9aa9fe1df4a37e989e37a2c4b13cc7709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.185107 kubelet[3924]: E0122 00:45:23.185056 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc876e3f4ec7f8c286e7eb3359bdfe9aa9fe1df4a37e989e37a2c4b13cc7709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.185107 kubelet[3924]: E0122 00:45:23.185092 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc876e3f4ec7f8c286e7eb3359bdfe9aa9fe1df4a37e989e37a2c4b13cc7709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7ms67" Jan 22 00:45:23.185167 kubelet[3924]: E0122 00:45:23.185111 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc876e3f4ec7f8c286e7eb3359bdfe9aa9fe1df4a37e989e37a2c4b13cc7709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7ms67" Jan 22 00:45:23.185195 kubelet[3924]: E0122 00:45:23.185163 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7ms67_kube-system(0681ac92-d1cc-4472-a4c1-25459fbeba8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7ms67_kube-system(0681ac92-d1cc-4472-a4c1-25459fbeba8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddc876e3f4ec7f8c286e7eb3359bdfe9aa9fe1df4a37e989e37a2c4b13cc7709\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7ms67" podUID="0681ac92-d1cc-4472-a4c1-25459fbeba8f" Jan 22 00:45:23.190155 systemd[1]: Created slice kubepods-besteffort-pod222ac10e_a19c_48d7_ba2a_f1cdbf34cf86.slice - libcontainer container kubepods-besteffort-pod222ac10e_a19c_48d7_ba2a_f1cdbf34cf86.slice. 
Jan 22 00:45:23.192416 containerd[2460]: time="2026-01-22T00:45:23.192383833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78h9h,Uid:222ac10e-a19c-48d7-ba2a-f1cdbf34cf86,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:23.243675 containerd[2460]: time="2026-01-22T00:45:23.243630755Z" level=error msg="Failed to destroy network for sandbox \"ebe428f26c75e7e0ada8f35fecbe82712c5efc1473957aa5308c39d8ab95c378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.248779 containerd[2460]: time="2026-01-22T00:45:23.248725967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78h9h,Uid:222ac10e-a19c-48d7-ba2a-f1cdbf34cf86,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe428f26c75e7e0ada8f35fecbe82712c5efc1473957aa5308c39d8ab95c378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.249008 kubelet[3924]: E0122 00:45:23.248980 3924 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe428f26c75e7e0ada8f35fecbe82712c5efc1473957aa5308c39d8ab95c378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 22 00:45:23.249075 kubelet[3924]: E0122 00:45:23.249033 3924 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe428f26c75e7e0ada8f35fecbe82712c5efc1473957aa5308c39d8ab95c378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78h9h" Jan 22 00:45:23.249075 kubelet[3924]: E0122 00:45:23.249055 3924 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebe428f26c75e7e0ada8f35fecbe82712c5efc1473957aa5308c39d8ab95c378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-78h9h" Jan 22 00:45:23.249136 kubelet[3924]: E0122 00:45:23.249096 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebe428f26c75e7e0ada8f35fecbe82712c5efc1473957aa5308c39d8ab95c378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:23.300634 containerd[2460]: time="2026-01-22T00:45:23.300597729Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 22 00:45:24.024426 systemd[1]: run-netns-cni\x2de0c53688\x2d3693\x2d4a74\x2de11b\x2d3e0ff562c27e.mount: Deactivated successfully. Jan 22 00:45:24.024959 systemd[1]: run-netns-cni\x2df57ac081\x2d1307\x2d42b5\x2d2b66\x2d84c5ec60d037.mount: Deactivated successfully. Jan 22 00:45:29.942391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857588262.mount: Deactivated successfully. Jan 22 00:45:29.974464 containerd[2460]: time="2026-01-22T00:45:29.974418750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:29.978328 containerd[2460]: time="2026-01-22T00:45:29.978294086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 22 00:45:29.980948 containerd[2460]: time="2026-01-22T00:45:29.980905568Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:29.984211 containerd[2460]: time="2026-01-22T00:45:29.984167867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 22 00:45:29.984705 containerd[2460]: time="2026-01-22T00:45:29.984441668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.683803347s" Jan 22 00:45:29.984705 containerd[2460]: time="2026-01-22T00:45:29.984473457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 22 00:45:29.995925 containerd[2460]: time="2026-01-22T00:45:29.995697643Z" level=info msg="CreateContainer within sandbox \"12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 22 00:45:30.020764 containerd[2460]: time="2026-01-22T00:45:30.019507343Z" level=info msg="Container 80098344aa603e9ecc8cd411c3e8ac7792b530f8a0e1e40bf0b19567aea9b4df: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:45:30.036285 containerd[2460]: time="2026-01-22T00:45:30.036258918Z" level=info msg="CreateContainer within sandbox \"12335fab1516b5cc512a5cce8fcdd5c6c523767d4ad4ccaa067a4f643632cb68\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"80098344aa603e9ecc8cd411c3e8ac7792b530f8a0e1e40bf0b19567aea9b4df\"" Jan 22 00:45:30.037868 containerd[2460]: time="2026-01-22T00:45:30.036798820Z" level=info msg="StartContainer for \"80098344aa603e9ecc8cd411c3e8ac7792b530f8a0e1e40bf0b19567aea9b4df\"" Jan 22 00:45:30.038434 containerd[2460]: time="2026-01-22T00:45:30.038408702Z" level=info msg="connecting to shim 80098344aa603e9ecc8cd411c3e8ac7792b530f8a0e1e40bf0b19567aea9b4df" address="unix:///run/containerd/s/d9670a4fd688224d4c2a739a472f35939022125da98a87a6e00afbeecc1806c1" protocol=ttrpc version=3 Jan 22 00:45:30.056905 systemd[1]: Started cri-containerd-80098344aa603e9ecc8cd411c3e8ac7792b530f8a0e1e40bf0b19567aea9b4df.scope - libcontainer container 
80098344aa603e9ecc8cd411c3e8ac7792b530f8a0e1e40bf0b19567aea9b4df. Jan 22 00:45:30.108000 audit: BPF prog-id=195 op=LOAD Jan 22 00:45:30.111504 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 22 00:45:30.111970 kernel: audit: type=1334 audit(1769042730.108:584): prog-id=195 op=LOAD Jan 22 00:45:30.112065 kernel: audit: type=1300 audit(1769042730.108:584): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.108000 audit[4913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.119465 kernel: audit: type=1327 audit(1769042730.108:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.108000 audit: BPF prog-id=196 op=LOAD Jan 22 00:45:30.121416 kernel: audit: type=1334 audit(1769042730.108:585): prog-id=196 op=LOAD Jan 22 00:45:30.108000 audit[4913]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.125779 kernel: audit: type=1300 audit(1769042730.108:585): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.130526 kernel: audit: type=1327 audit(1769042730.108:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.132112 kernel: audit: type=1334 audit(1769042730.108:586): prog-id=196 op=UNLOAD Jan 22 00:45:30.108000 audit: BPF prog-id=196 op=UNLOAD Jan 22 00:45:30.136624 kernel: audit: type=1300 audit(1769042730.108:586): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.108000 audit[4913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.108000 audit: BPF prog-id=195 op=UNLOAD Jan 22 00:45:30.146338 kernel: audit: type=1327 audit(1769042730.108:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.146389 kernel: audit: type=1334 audit(1769042730.108:587): prog-id=195 op=UNLOAD Jan 22 00:45:30.108000 audit[4913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.108000 audit: BPF prog-id=197 op=LOAD Jan 22 00:45:30.108000 audit[4913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4458 pid=4913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:30.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303938333434616136303365396563633863643431316333653861 Jan 22 00:45:30.155895 containerd[2460]: time="2026-01-22T00:45:30.155813293Z" level=info msg="StartContainer for \"80098344aa603e9ecc8cd411c3e8ac7792b530f8a0e1e40bf0b19567aea9b4df\" returns successfully" Jan 22 00:45:30.244138 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 22 00:45:30.244228 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 22 00:45:30.421112 kubelet[3924]: I0122 00:45:30.421072 3924 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz7bc\" (UniqueName: \"kubernetes.io/projected/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-kube-api-access-dz7bc\") pod \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\" (UID: \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\") " Jan 22 00:45:30.421457 kubelet[3924]: I0122 00:45:30.421125 3924 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-backend-key-pair\") pod \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\" (UID: \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\") " Jan 22 00:45:30.421457 kubelet[3924]: I0122 00:45:30.421144 3924 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-ca-bundle\") pod \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\" (UID: \"1e056b93-8ad4-41ba-8df7-4e9107c9c36c\") " Jan 22 00:45:30.424290 kubelet[3924]: I0122 00:45:30.424059 3924 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1e056b93-8ad4-41ba-8df7-4e9107c9c36c" (UID: "1e056b93-8ad4-41ba-8df7-4e9107c9c36c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 22 00:45:30.425436 kubelet[3924]: I0122 00:45:30.425403 3924 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-kube-api-access-dz7bc" (OuterVolumeSpecName: "kube-api-access-dz7bc") pod "1e056b93-8ad4-41ba-8df7-4e9107c9c36c" (UID: "1e056b93-8ad4-41ba-8df7-4e9107c9c36c"). InnerVolumeSpecName "kube-api-access-dz7bc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 22 00:45:30.428756 kubelet[3924]: I0122 00:45:30.428704 3924 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1e056b93-8ad4-41ba-8df7-4e9107c9c36c" (UID: "1e056b93-8ad4-41ba-8df7-4e9107c9c36c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 22 00:45:30.522925 kubelet[3924]: I0122 00:45:30.522884 3924 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-backend-key-pair\") on node \"ci-4515.1.0-n-d879fbfda5\" DevicePath \"\"" Jan 22 00:45:30.522925 kubelet[3924]: I0122 00:45:30.522925 3924 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-whisker-ca-bundle\") on node \"ci-4515.1.0-n-d879fbfda5\" DevicePath \"\"" Jan 22 00:45:30.522925 kubelet[3924]: I0122 00:45:30.522934 3924 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dz7bc\" (UniqueName: \"kubernetes.io/projected/1e056b93-8ad4-41ba-8df7-4e9107c9c36c-kube-api-access-dz7bc\") on node \"ci-4515.1.0-n-d879fbfda5\" DevicePath \"\"" Jan 22 00:45:30.942380 systemd[1]: var-lib-kubelet-pods-1e056b93\x2d8ad4\x2d41ba\x2d8df7\x2d4e9107c9c36c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddz7bc.mount: Deactivated successfully. Jan 22 00:45:30.942673 systemd[1]: var-lib-kubelet-pods-1e056b93\x2d8ad4\x2d41ba\x2d8df7\x2d4e9107c9c36c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 22 00:45:31.189456 systemd[1]: Removed slice kubepods-besteffort-pod1e056b93_8ad4_41ba_8df7_4e9107c9c36c.slice - libcontainer container kubepods-besteffort-pod1e056b93_8ad4_41ba_8df7_4e9107c9c36c.slice. Jan 22 00:45:31.341559 kubelet[3924]: I0122 00:45:31.341366 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nrfzq" podStartSLOduration=1.823642561 podStartE2EDuration="20.341347943s" podCreationTimestamp="2026-01-22 00:45:11 +0000 UTC" firstStartedPulling="2026-01-22 00:45:11.467476035 +0000 UTC m=+20.373709719" lastFinishedPulling="2026-01-22 00:45:29.985181424 +0000 UTC m=+38.891415101" observedRunningTime="2026-01-22 00:45:30.361441755 +0000 UTC m=+39.267675451" watchObservedRunningTime="2026-01-22 00:45:31.341347943 +0000 UTC m=+40.247581738" Jan 22 00:45:31.395588 systemd[1]: Created slice kubepods-besteffort-podc7ca39d8_cd5c_4ad6_a84c_aac52a3306f1.slice - libcontainer container kubepods-besteffort-podc7ca39d8_cd5c_4ad6_a84c_aac52a3306f1.slice. 
Jan 22 00:45:31.428552 kubelet[3924]: I0122 00:45:31.428501 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1-whisker-ca-bundle\") pod \"whisker-759c5b6477-kxt5n\" (UID: \"c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1\") " pod="calico-system/whisker-759c5b6477-kxt5n" Jan 22 00:45:31.428552 kubelet[3924]: I0122 00:45:31.428545 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dckdv\" (UniqueName: \"kubernetes.io/projected/c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1-kube-api-access-dckdv\") pod \"whisker-759c5b6477-kxt5n\" (UID: \"c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1\") " pod="calico-system/whisker-759c5b6477-kxt5n" Jan 22 00:45:31.429156 kubelet[3924]: I0122 00:45:31.428575 3924 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1-whisker-backend-key-pair\") pod \"whisker-759c5b6477-kxt5n\" (UID: \"c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1\") " pod="calico-system/whisker-759c5b6477-kxt5n" Jan 22 00:45:31.700423 containerd[2460]: time="2026-01-22T00:45:31.699966146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759c5b6477-kxt5n,Uid:c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:31.849605 systemd-networkd[2242]: calid8c7dc89fcf: Link UP Jan 22 00:45:31.849959 systemd-networkd[2242]: calid8c7dc89fcf: Gained carrier Jan 22 00:45:31.874478 containerd[2460]: 2026-01-22 00:45:31.747 [INFO][5109] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 22 00:45:31.874478 containerd[2460]: 2026-01-22 00:45:31.757 [INFO][5109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0 whisker-759c5b6477- calico-system c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1 874 0 2026-01-22 00:45:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:759c5b6477 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 whisker-759c5b6477-kxt5n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid8c7dc89fcf [] [] }} ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-" Jan 22 00:45:31.874478 containerd[2460]: 2026-01-22 00:45:31.757 [INFO][5109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" Jan 22 00:45:31.874478 containerd[2460]: 2026-01-22 00:45:31.792 [INFO][5120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" HandleID="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Workload="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.793 [INFO][5120] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" HandleID="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Workload="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002adf50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"whisker-759c5b6477-kxt5n", "timestamp":"2026-01-22 00:45:31.792861016 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.793 [INFO][5120] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.793 [INFO][5120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.793 [INFO][5120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.800 [INFO][5120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.803 [INFO][5120] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.807 [INFO][5120] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.809 [INFO][5120] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875031 containerd[2460]: 2026-01-22 00:45:31.811 [INFO][5120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875268 containerd[2460]: 2026-01-22 00:45:31.811 [INFO][5120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875268 containerd[2460]: 2026-01-22 00:45:31.813 [INFO][5120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4 Jan 22 00:45:31.875268 containerd[2460]: 2026-01-22 00:45:31.821 [INFO][5120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875268 containerd[2460]: 2026-01-22 00:45:31.826 [INFO][5120] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.129/26] block=192.168.22.128/26 handle="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875268 containerd[2460]: 2026-01-22 00:45:31.826 [INFO][5120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.129/26] handle="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:31.875268 containerd[2460]: 2026-01-22 00:45:31.826 
[INFO][5120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:45:31.875268 containerd[2460]: 2026-01-22 00:45:31.826 [INFO][5120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.129/26] IPv6=[] ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" HandleID="k8s-pod-network.a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Workload="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" Jan 22 00:45:31.875433 containerd[2460]: 2026-01-22 00:45:31.830 [INFO][5109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0", GenerateName:"whisker-759c5b6477-", Namespace:"calico-system", SelfLink:"", UID:"c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"759c5b6477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"whisker-759c5b6477-kxt5n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8c7dc89fcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:31.875433 containerd[2460]: 2026-01-22 00:45:31.830 [INFO][5109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.129/32] ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" Jan 22 00:45:31.875523 containerd[2460]: 2026-01-22 00:45:31.830 [INFO][5109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8c7dc89fcf ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" Jan 22 00:45:31.875523 containerd[2460]: 2026-01-22 00:45:31.849 [INFO][5109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" Jan 22 00:45:31.875571 containerd[2460]: 2026-01-22 00:45:31.850 [INFO][5109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0", GenerateName:"whisker-759c5b6477-", Namespace:"calico-system", SelfLink:"", UID:"c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"759c5b6477", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4", Pod:"whisker-759c5b6477-kxt5n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.22.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8c7dc89fcf", MAC:"f2:69:f6:d3:8c:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:31.876355 containerd[2460]: 2026-01-22 00:45:31.870 [INFO][5109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" Namespace="calico-system" Pod="whisker-759c5b6477-kxt5n" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-whisker--759c5b6477--kxt5n-eth0" Jan 22 00:45:31.939452 containerd[2460]: time="2026-01-22T00:45:31.939410826Z" level=info msg="connecting to shim a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4" address="unix:///run/containerd/s/04277abbe2dcafbcd5572f8d5b5e25b6b8b8a4696caf6c4b277916b743b94d91" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:31.976439 systemd[1]: Started cri-containerd-a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4.scope - libcontainer container a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4. 
Jan 22 00:45:31.997000 audit: BPF prog-id=198 op=LOAD Jan 22 00:45:31.997000 audit: BPF prog-id=199 op=LOAD Jan 22 00:45:31.997000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5148 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:31.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138663737323837613934393365303261373432636462653433646161 Jan 22 00:45:31.997000 audit: BPF prog-id=199 op=UNLOAD Jan 22 00:45:31.997000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5148 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:31.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138663737323837613934393365303261373432636462653433646161 Jan 22 00:45:31.997000 audit: BPF prog-id=200 op=LOAD Jan 22 00:45:31.997000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5148 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:31.997000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138663737323837613934393365303261373432636462653433646161 Jan 22 00:45:31.998000 audit: BPF prog-id=201 op=LOAD Jan 22 00:45:31.998000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5148 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:31.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138663737323837613934393365303261373432636462653433646161 Jan 22 00:45:31.998000 audit: BPF prog-id=201 op=UNLOAD Jan 22 00:45:31.998000 audit[5159]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5148 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:31.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138663737323837613934393365303261373432636462653433646161 Jan 22 00:45:31.998000 audit: BPF prog-id=200 op=UNLOAD Jan 22 00:45:31.998000 audit[5159]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5148 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:31.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138663737323837613934393365303261373432636462653433646161 Jan 22 00:45:31.998000 audit: BPF prog-id=202 op=LOAD Jan 22 00:45:31.998000 audit[5159]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5148 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:31.998000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138663737323837613934393365303261373432636462653433646161 Jan 22 00:45:32.044941 containerd[2460]: time="2026-01-22T00:45:32.044888166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759c5b6477-kxt5n,Uid:c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8f77287a9493e02a742cdbe43daa552a0b781bc57a64496feb6f515b7cd34c4\"" Jan 22 00:45:32.047771 containerd[2460]: time="2026-01-22T00:45:32.047715344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:45:32.068000 audit: BPF prog-id=203 op=LOAD Jan 22 00:45:32.068000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc842c3770 a2=98 a3=1fffffffffffffff items=0 ppid=5048 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.068000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:45:32.068000 audit: BPF prog-id=203 op=UNLOAD Jan 22 00:45:32.068000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc842c3740 a3=0 items=0 ppid=5048 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.068000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:45:32.068000 audit: BPF prog-id=204 op=LOAD Jan 22 00:45:32.068000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc842c3650 a2=94 a3=3 items=0 ppid=5048 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.068000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:45:32.068000 audit: BPF prog-id=204 op=UNLOAD Jan 22 00:45:32.068000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc842c3650 a2=94 a3=3 items=0 ppid=5048 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.068000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:45:32.068000 audit: BPF prog-id=205 op=LOAD Jan 22 00:45:32.068000 audit[5211]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc842c3690 a2=94 a3=7ffc842c3870 items=0 ppid=5048 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.068000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:45:32.068000 audit: BPF prog-id=205 op=UNLOAD Jan 22 00:45:32.068000 audit[5211]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc842c3690 a2=94 a3=7ffc842c3870 items=0 ppid=5048 pid=5211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.068000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 22 00:45:32.070000 audit: BPF prog-id=206 op=LOAD Jan 22 00:45:32.070000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe68fe4170 a2=98 a3=3 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.070000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.070000 audit: BPF prog-id=206 op=UNLOAD Jan 22 00:45:32.070000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe68fe4140 a3=0 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.070000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.070000 audit: BPF prog-id=207 op=LOAD Jan 22 00:45:32.070000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe68fe3f60 a2=94 a3=54428f items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.070000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.070000 audit: BPF prog-id=207 op=UNLOAD Jan 22 00:45:32.070000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe68fe3f60 a2=94 a3=54428f items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.070000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.070000 audit: BPF prog-id=208 op=LOAD Jan 22 00:45:32.070000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe68fe3f90 a2=94 a3=2 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.070000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.070000 audit: BPF prog-id=208 op=UNLOAD Jan 22 00:45:32.070000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe68fe3f90 a2=0 a3=2 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.070000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.191000 audit: BPF prog-id=209 op=LOAD Jan 22 00:45:32.191000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe68fe3e50 a2=94 a3=1 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.191000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.191000 audit: BPF prog-id=209 op=UNLOAD Jan 22 00:45:32.191000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe68fe3e50 a2=94 a3=1 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.191000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.199000 audit: BPF prog-id=210 op=LOAD Jan 22 00:45:32.199000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe68fe3e40 a2=94 a3=4 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.199000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.199000 audit: BPF prog-id=210 op=UNLOAD Jan 22 00:45:32.199000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe68fe3e40 a2=0 a3=4 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.199000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.200000 audit: BPF prog-id=211 op=LOAD Jan 22 00:45:32.200000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe68fe3ca0 a2=94 a3=5 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.200000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.200000 audit: BPF prog-id=211 op=UNLOAD Jan 22 00:45:32.200000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe68fe3ca0 a2=0 a3=5 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.200000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.200000 audit: BPF prog-id=212 op=LOAD Jan 22 00:45:32.200000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe68fe3ec0 a2=94 a3=6 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.200000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.200000 audit: BPF prog-id=212 op=UNLOAD Jan 22 00:45:32.200000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe68fe3ec0 a2=0 a3=6 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.200000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.200000 audit: BPF prog-id=213 op=LOAD Jan 22 00:45:32.200000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe68fe3670 a2=94 a3=88 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.200000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.200000 audit: BPF prog-id=214 op=LOAD Jan 22 00:45:32.200000 audit[5212]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe68fe34f0 a2=94 a3=2 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.200000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.200000 audit: BPF prog-id=214 op=UNLOAD Jan 22 00:45:32.200000 audit[5212]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe68fe3520 a2=0 a3=7ffe68fe3620 items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.200000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.201000 audit: BPF prog-id=213 op=UNLOAD Jan 22 00:45:32.201000 audit[5212]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3aaf2d10 a2=0 a3=bc8328f310a93e6e items=0 ppid=5048 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 22 00:45:32.207000 audit: BPF prog-id=215 op=LOAD Jan 22 00:45:32.207000 audit[5215]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd46b45450 a2=98 a3=1999999999999999 items=0 ppid=5048 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.207000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:45:32.208000 audit: BPF prog-id=215 op=UNLOAD Jan 22 00:45:32.208000 audit[5215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd46b45420 a3=0 items=0 ppid=5048 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.208000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:45:32.208000 audit: BPF prog-id=216 op=LOAD Jan 22 00:45:32.208000 audit[5215]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd46b45330 a2=94 a3=ffff items=0 ppid=5048 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.208000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:45:32.208000 audit: BPF prog-id=216 op=UNLOAD Jan 22 00:45:32.208000 audit[5215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd46b45330 a2=94 a3=ffff items=0 ppid=5048 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.208000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:45:32.208000 audit: BPF prog-id=217 op=LOAD Jan 22 00:45:32.208000 audit[5215]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd46b45370 a2=94 a3=7ffd46b45550 items=0 ppid=5048 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.208000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:45:32.208000 audit: BPF prog-id=217 op=UNLOAD Jan 22 00:45:32.208000 audit[5215]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd46b45370 a2=94 a3=7ffd46b45550 items=0 ppid=5048 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.208000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 22 00:45:32.264559 systemd-networkd[2242]: vxlan.calico: Link UP Jan 22 00:45:32.264571 systemd-networkd[2242]: vxlan.calico: Gained carrier Jan 22 00:45:32.282000 audit: BPF prog-id=218 op=LOAD Jan 22 00:45:32.282000 audit[5238]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff31068ae0 a2=98 a3=0 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.282000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.282000 audit: BPF prog-id=218 op=UNLOAD Jan 22 00:45:32.282000 audit[5238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff31068ab0 a3=0 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.282000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.282000 audit: BPF prog-id=219 op=LOAD Jan 22 00:45:32.282000 audit[5238]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff310688f0 a2=94 a3=54428f items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.282000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.283000 audit: BPF prog-id=219 op=UNLOAD Jan 22 00:45:32.283000 audit[5238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff310688f0 a2=94 a3=54428f items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.283000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.283000 audit: BPF prog-id=220 op=LOAD Jan 22 00:45:32.283000 audit[5238]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff31068920 a2=94 a3=2 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.283000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.283000 audit: BPF prog-id=220 op=UNLOAD Jan 22 00:45:32.283000 audit[5238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff31068920 a2=0 a3=2 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.283000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.283000 audit: BPF prog-id=221 op=LOAD Jan 22 00:45:32.283000 audit[5238]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff310686d0 a2=94 a3=4 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.283000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.283000 audit: BPF prog-id=221 op=UNLOAD Jan 22 00:45:32.283000 audit[5238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff310686d0 a2=94 a3=4 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.283000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.283000 audit: BPF prog-id=222 op=LOAD Jan 22 00:45:32.283000 audit[5238]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff310687d0 a2=94 a3=7fff31068950 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.283000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.283000 audit: BPF prog-id=222 op=UNLOAD Jan 22 00:45:32.283000 audit[5238]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff310687d0 a2=0 a3=7fff31068950 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.283000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.285000 audit: BPF prog-id=223 op=LOAD Jan 22 00:45:32.285000 audit[5238]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff31067f00 a2=94 a3=2 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.285000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.285000 audit: BPF prog-id=223 op=UNLOAD Jan 22 00:45:32.285000 audit[5238]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff31067f00 a2=0 a3=2 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.285000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.285000 audit: BPF prog-id=224 op=LOAD Jan 22 00:45:32.285000 audit[5238]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff31068000 a2=94 a3=30 items=0 ppid=5048 pid=5238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.285000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 22 00:45:32.292000 audit: BPF prog-id=225 op=LOAD Jan 22 00:45:32.292000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffee89f120 a2=98 a3=0 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.292000 audit: BPF prog-id=225 op=UNLOAD Jan 22 00:45:32.292000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffee89f0f0 a3=0 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.292000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.292000 audit: BPF prog-id=226 op=LOAD Jan 22 00:45:32.292000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffee89ef10 a2=94 a3=54428f items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.292000 audit: BPF prog-id=226 op=UNLOAD Jan 22 00:45:32.292000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffee89ef10 a2=94 a3=54428f items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.292000 audit: BPF prog-id=227 op=LOAD Jan 22 00:45:32.292000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffee89ef40 a2=94 a3=2 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.292000 audit: BPF prog-id=227 op=UNLOAD Jan 22 00:45:32.292000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffee89ef40 a2=0 a3=2 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.292000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.298417 containerd[2460]: time="2026-01-22T00:45:32.298212286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:32.300984 containerd[2460]: time="2026-01-22T00:45:32.300951820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:45:32.301700 containerd[2460]: time="2026-01-22T00:45:32.300978122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:32.301776 kubelet[3924]: E0122 00:45:32.301269 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:45:32.301776 kubelet[3924]: E0122 00:45:32.301321 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:45:32.301857 kubelet[3924]: E0122 00:45:32.301474 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:27c137ff87524ee5bb8d79863905fd63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:32.304433 containerd[2460]: time="2026-01-22T00:45:32.304409630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:45:32.420000 audit: BPF prog-id=228 op=LOAD Jan 22 00:45:32.420000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffee89ee00 a2=94 a3=1 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.420000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.420000 audit: BPF prog-id=228 op=UNLOAD Jan 22 00:45:32.420000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffee89ee00 a2=94 a3=1 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.420000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.429000 audit: BPF prog-id=229 op=LOAD Jan 22 00:45:32.429000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffee89edf0 a2=94 a3=4 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.429000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.429000 audit: BPF prog-id=229 op=UNLOAD Jan 22 00:45:32.429000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffee89edf0 a2=0 a3=4 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.429000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.430000 audit: BPF prog-id=230 op=LOAD Jan 22 00:45:32.430000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffee89ec50 a2=94 a3=5 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.430000 audit: BPF prog-id=230 op=UNLOAD Jan 22 00:45:32.430000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffee89ec50 a2=0 a3=5 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.430000 audit: BPF prog-id=231 op=LOAD Jan 22 00:45:32.430000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffee89ee70 a2=94 a3=6 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.430000 audit: BPF prog-id=231 op=UNLOAD Jan 22 00:45:32.430000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffee89ee70 a2=0 a3=6 items=0 ppid=5048 pid=5244 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.430000 audit: BPF prog-id=232 op=LOAD Jan 22 00:45:32.430000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffee89e620 a2=94 a3=88 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.430000 audit: BPF prog-id=233 op=LOAD Jan 22 00:45:32.430000 audit[5244]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffee89e4a0 a2=94 a3=2 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.430000 audit: BPF prog-id=233 op=UNLOAD Jan 22 00:45:32.430000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffee89e4d0 a2=0 a3=7fffee89e5d0 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.431000 audit: BPF prog-id=232 op=UNLOAD Jan 22 00:45:32.431000 audit[5244]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1543fd10 a2=0 a3=f9f6f08b0d58a901 items=0 ppid=5048 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.431000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 22 00:45:32.437000 audit: BPF prog-id=224 op=UNLOAD Jan 22 00:45:32.437000 audit[5048]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c00091c300 a2=0 a3=0 items=0 ppid=5030 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.437000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 22 00:45:32.488000 audit[5268]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 
00:45:32.488000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdf00c1390 a2=0 a3=7ffdf00c137c items=0 ppid=5048 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.488000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:32.489000 audit[5269]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5269 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:32.489000 audit[5269]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffed0d0f2c0 a2=0 a3=7ffed0d0f2ac items=0 ppid=5048 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.489000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:32.500000 audit[5266]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:32.500000 audit[5266]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc798ae0b0 a2=0 a3=7ffc798ae09c items=0 ppid=5048 pid=5266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.500000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:32.503000 audit[5273]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5273 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:32.503000 audit[5273]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc822d58f0 a2=0 a3=7ffc822d58dc items=0 ppid=5048 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:32.503000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:32.544237 containerd[2460]: time="2026-01-22T00:45:32.544205456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:32.546961 containerd[2460]: time="2026-01-22T00:45:32.546901358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:45:32.546961 containerd[2460]: time="2026-01-22T00:45:32.546944102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:32.547123 kubelet[3924]: E0122 00:45:32.547092 3924 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:45:32.547389 kubelet[3924]: E0122 00:45:32.547137 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:45:32.547504 kubelet[3924]: E0122 00:45:32.547453 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:32.549166 kubelet[3924]: E0122 00:45:32.549130 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:45:33.186437 kubelet[3924]: I0122 00:45:33.186394 3924 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e056b93-8ad4-41ba-8df7-4e9107c9c36c" path="/var/lib/kubelet/pods/1e056b93-8ad4-41ba-8df7-4e9107c9c36c/volumes" Jan 22 00:45:33.315998 systemd-networkd[2242]: calid8c7dc89fcf: Gained IPv6LL Jan 22 00:45:33.328951 kubelet[3924]: E0122 00:45:33.328903 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:45:33.351000 audit[5284]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:33.351000 audit[5284]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc213c3f80 a2=0 a3=7ffc213c3f6c items=0 ppid=4073 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:33.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:33.358000 audit[5284]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=5284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:33.358000 audit[5284]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc213c3f80 a2=0 a3=0 items=0 ppid=4073 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:33.358000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:33.635886 systemd-networkd[2242]: vxlan.calico: Gained IPv6LL Jan 22 00:45:34.185261 containerd[2460]: time="2026-01-22T00:45:34.185217028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68764f557f-l8tpm,Uid:dd066ec6-7b8c-4975-b067-940020b582cf,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:45:34.277681 systemd-networkd[2242]: cali26835b9dcf2: Link UP Jan 22 00:45:34.278043 systemd-networkd[2242]: cali26835b9dcf2: Gained carrier Jan 22 00:45:34.291788 containerd[2460]: 2026-01-22 00:45:34.224 [INFO][5287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0 calico-apiserver-68764f557f- 
calico-apiserver dd066ec6-7b8c-4975-b067-940020b582cf 806 0 2026-01-22 00:45:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68764f557f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 calico-apiserver-68764f557f-l8tpm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali26835b9dcf2 [] [] }} ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-" Jan 22 00:45:34.291788 containerd[2460]: 2026-01-22 00:45:34.224 [INFO][5287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" Jan 22 00:45:34.291788 containerd[2460]: 2026-01-22 00:45:34.245 [INFO][5300] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" HandleID="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.245 [INFO][5300] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" HandleID="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"calico-apiserver-68764f557f-l8tpm", "timestamp":"2026-01-22 00:45:34.245660596 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.245 [INFO][5300] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.245 [INFO][5300] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.245 [INFO][5300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.250 [INFO][5300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.253 [INFO][5300] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.256 [INFO][5300] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.258 [INFO][5300] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292009 containerd[2460]: 2026-01-22 00:45:34.259 [INFO][5300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292242 containerd[2460]: 2026-01-22 00:45:34.259 [INFO][5300] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292242 containerd[2460]: 2026-01-22 00:45:34.261 [INFO][5300] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d Jan 22 00:45:34.292242 containerd[2460]: 2026-01-22 00:45:34.264 [INFO][5300] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292242 containerd[2460]: 2026-01-22 00:45:34.272 [INFO][5300] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.130/26] block=192.168.22.128/26 handle="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292242 containerd[2460]: 2026-01-22 00:45:34.272 [INFO][5300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.130/26] handle="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:34.292242 containerd[2460]: 2026-01-22 00:45:34.272 [INFO][5300] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:45:34.292242 containerd[2460]: 2026-01-22 00:45:34.272 [INFO][5300] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.130/26] IPv6=[] ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" HandleID="k8s-pod-network.0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" Jan 22 00:45:34.292408 containerd[2460]: 2026-01-22 00:45:34.274 [INFO][5287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0", GenerateName:"calico-apiserver-68764f557f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd066ec6-7b8c-4975-b067-940020b582cf", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68764f557f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"calico-apiserver-68764f557f-l8tpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26835b9dcf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:34.293157 containerd[2460]: 2026-01-22 00:45:34.274 [INFO][5287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.130/32] ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" Jan 22 00:45:34.293157 containerd[2460]: 2026-01-22 00:45:34.274 [INFO][5287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26835b9dcf2 ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" Jan 22 00:45:34.293157 containerd[2460]: 2026-01-22 00:45:34.277 [INFO][5287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" Jan 22 00:45:34.293254 containerd[2460]: 2026-01-22 00:45:34.277 
[INFO][5287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0", GenerateName:"calico-apiserver-68764f557f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dd066ec6-7b8c-4975-b067-940020b582cf", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68764f557f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d", Pod:"calico-apiserver-68764f557f-l8tpm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali26835b9dcf2", MAC:"1e:b1:09:ec:f6:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:34.293324 containerd[2460]: 2026-01-22 00:45:34.288 [INFO][5287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-l8tpm" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--l8tpm-eth0" Jan 22 00:45:34.303000 audit[5314]: NETFILTER_CFG table=filter:128 family=2 entries=50 op=nft_register_chain pid=5314 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:34.303000 audit[5314]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffc9d250210 a2=0 a3=7ffc9d2501fc items=0 ppid=5048 pid=5314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.303000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:34.327764 containerd[2460]: time="2026-01-22T00:45:34.327239391Z" level=info msg="connecting to shim 0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d" address="unix:///run/containerd/s/0c92c984314c1d7285d473efe8959a1f94ff630b3d1451657c00647fdfb08afb" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:34.349920 systemd[1]: Started cri-containerd-0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d.scope - libcontainer container 
0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d. Jan 22 00:45:34.358000 audit: BPF prog-id=234 op=LOAD Jan 22 00:45:34.358000 audit: BPF prog-id=235 op=LOAD Jan 22 00:45:34.358000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5323 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393964643863383762373064366538383838626431336366376232 Jan 22 00:45:34.358000 audit: BPF prog-id=235 op=UNLOAD Jan 22 00:45:34.358000 audit[5334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5323 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393964643863383762373064366538383838626431336366376232 Jan 22 00:45:34.358000 audit: BPF prog-id=236 op=LOAD Jan 22 00:45:34.358000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5323 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393964643863383762373064366538383838626431336366376232 Jan 22 00:45:34.358000 audit: BPF prog-id=237 op=LOAD Jan 22 00:45:34.358000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5323 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.358000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393964643863383762373064366538383838626431336366376232 Jan 22 00:45:34.359000 audit: BPF prog-id=237 op=UNLOAD Jan 22 00:45:34.359000 audit[5334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5323 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393964643863383762373064366538383838626431336366376232 Jan 22 00:45:34.359000 audit: BPF 
prog-id=236 op=UNLOAD Jan 22 00:45:34.359000 audit[5334]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5323 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393964643863383762373064366538383838626431336366376232 Jan 22 00:45:34.359000 audit: BPF prog-id=238 op=LOAD Jan 22 00:45:34.359000 audit[5334]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5323 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:34.359000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065393964643863383762373064366538383838626431336366376232 Jan 22 00:45:34.392114 containerd[2460]: time="2026-01-22T00:45:34.392084899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68764f557f-l8tpm,Uid:dd066ec6-7b8c-4975-b067-940020b582cf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0e99dd8c87b70d6e8888bd13cf7b2b1c86379f1ca7bec1455b1759742ad23b3d\"" Jan 22 00:45:34.393400 containerd[2460]: time="2026-01-22T00:45:34.393373711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:45:34.632008 containerd[2460]: time="2026-01-22T00:45:34.631960610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:34.635652 containerd[2460]: time="2026-01-22T00:45:34.635617432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:45:34.635712 containerd[2460]: time="2026-01-22T00:45:34.635695326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:34.635904 kubelet[3924]: E0122 00:45:34.635871 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:34.636211 kubelet[3924]: E0122 00:45:34.635921 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:34.636211 kubelet[3924]: E0122 00:45:34.636072 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8jng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68764f557f-l8tpm_calico-apiserver(dd066ec6-7b8c-4975-b067-940020b582cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:34.637535 kubelet[3924]: E0122 00:45:34.637493 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:45:35.185939 containerd[2460]: time="2026-01-22T00:45:35.185863566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ctkzj,Uid:9f7bd783-d45a-483f-ab06-4c0a65b60ba1,Namespace:kube-system,Attempt:0,}" Jan 22 00:45:35.286389 systemd-networkd[2242]: cali7bc178b1472: Link UP Jan 22 00:45:35.287113 systemd-networkd[2242]: cali7bc178b1472: Gained carrier Jan 22 00:45:35.299240 containerd[2460]: 2026-01-22 00:45:35.234 [INFO][5358] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0 coredns-668d6bf9bc- kube-system 9f7bd783-d45a-483f-ab06-4c0a65b60ba1 805 0 2026-01-22 00:44:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 coredns-668d6bf9bc-ctkzj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7bc178b1472 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-" Jan 22 00:45:35.299240 containerd[2460]: 2026-01-22 00:45:35.234 [INFO][5358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" Jan 22 00:45:35.299240 containerd[2460]: 2026-01-22 00:45:35.258 [INFO][5370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" HandleID="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Workload="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.258 [INFO][5370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" HandleID="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Workload="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"coredns-668d6bf9bc-ctkzj", "timestamp":"2026-01-22 00:45:35.258079936 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.258 [INFO][5370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.258 [INFO][5370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.258 [INFO][5370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.263 [INFO][5370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.266 [INFO][5370] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.269 [INFO][5370] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.270 [INFO][5370] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.299659 containerd[2460]: 2026-01-22 00:45:35.272 [INFO][5370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.300876 containerd[2460]: 2026-01-22 00:45:35.272 [INFO][5370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.300876 containerd[2460]: 2026-01-22 00:45:35.274 [INFO][5370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4 Jan 22 00:45:35.300876 containerd[2460]: 2026-01-22 00:45:35.277 [INFO][5370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.300876 containerd[2460]: 2026-01-22 00:45:35.282 [INFO][5370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.131/26] block=192.168.22.128/26 handle="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.300876 containerd[2460]: 2026-01-22 00:45:35.283 [INFO][5370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.131/26] handle="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:35.300876 containerd[2460]: 2026-01-22 00:45:35.283 [INFO][5370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 22 00:45:35.300876 containerd[2460]: 2026-01-22 00:45:35.283 [INFO][5370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.131/26] IPv6=[] ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" HandleID="k8s-pod-network.f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Workload="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" Jan 22 00:45:35.301039 containerd[2460]: 2026-01-22 00:45:35.284 [INFO][5358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f7bd783-d45a-483f-ab06-4c0a65b60ba1", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"coredns-668d6bf9bc-ctkzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bc178b1472", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:35.301039 containerd[2460]: 2026-01-22 00:45:35.284 [INFO][5358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.131/32] ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" Jan 22 00:45:35.301039 containerd[2460]: 2026-01-22 00:45:35.284 [INFO][5358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bc178b1472 ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" Jan 22 00:45:35.301039 containerd[2460]: 2026-01-22 00:45:35.286 [INFO][5358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" Jan 22 00:45:35.301039 containerd[2460]: 2026-01-22 00:45:35.287 [INFO][5358] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9f7bd783-d45a-483f-ab06-4c0a65b60ba1", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4", Pod:"coredns-668d6bf9bc-ctkzj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bc178b1472", MAC:"66:34:c3:ff:2d:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:35.301039 containerd[2460]: 2026-01-22 00:45:35.296 [INFO][5358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" Namespace="kube-system" Pod="coredns-668d6bf9bc-ctkzj" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--ctkzj-eth0" Jan 22 00:45:35.315847 kernel: kauditd_printk_skb: 256 callbacks suppressed Jan 22 00:45:35.315926 kernel: audit: type=1325 audit(1769042735.313:674): table=filter:129 family=2 entries=46 op=nft_register_chain pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:35.313000 audit[5385]: NETFILTER_CFG table=filter:129 family=2 entries=46 op=nft_register_chain pid=5385 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:35.313000 audit[5385]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7fffd9a52340 a2=0 a3=7fffd9a5232c items=0 ppid=5048 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:45:35.325102 kernel: audit: type=1300 audit(1769042735.313:674): arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7fffd9a52340 a2=0 a3=7fffd9a5232c items=0 ppid=5048 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.313000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:35.330625 kernel: audit: type=1327 audit(1769042735.313:674): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:35.331336 kubelet[3924]: E0122 00:45:35.331228 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:45:35.345787 containerd[2460]: time="2026-01-22T00:45:35.345732752Z" level=info msg="connecting to shim f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4" address="unix:///run/containerd/s/9753b798d136e0ff8350cfa5e69662cb642b194c26be83880de7b1b066c5d8bc" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:35.369000 audit[5414]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:35.378825 kernel: audit: type=1325 audit(1769042735.369:675): table=filter:130 family=2 entries=20 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:35.378885 kernel: audit: type=1300 audit(1769042735.369:675): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc4414830 a2=0 a3=7ffdc441481c items=0 ppid=4073 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.369000 audit[5414]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc4414830 a2=0 a3=7ffdc441481c items=0 ppid=4073 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.369000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:35.385754 kernel: audit: type=1327 audit(1769042735.369:675): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:35.385854 kernel: audit: type=1325 audit(1769042735.376:676): table=nat:131 family=2 entries=14 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:35.376000 audit[5414]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 22 00:45:35.376000 audit[5414]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdc4414830 a2=0 a3=0 items=0 ppid=4073 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:35.392509 kernel: audit: type=1300 audit(1769042735.376:676): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdc4414830 a2=0 a3=0 items=0 ppid=4073 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.392565 kernel: audit: type=1327 audit(1769042735.376:676): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:35.393090 systemd[1]: Started cri-containerd-f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4.scope - libcontainer container f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4. Jan 22 00:45:35.405000 audit: BPF prog-id=239 op=LOAD Jan 22 00:45:35.406000 audit: BPF prog-id=240 op=LOAD Jan 22 00:45:35.406000 audit[5405]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=5394 pid=5405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.407863 kernel: audit: type=1334 audit(1769042735.405:677): prog-id=239 op=LOAD Jan 22 00:45:35.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306132646232386535363465303236653062613132313366636166 Jan 22 00:45:35.406000 audit: BPF prog-id=240 op=UNLOAD Jan 22 00:45:35.406000 audit[5405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5394 pid=5405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306132646232386535363465303236653062613132313366636166 Jan 22 00:45:35.406000 audit: BPF prog-id=241 op=LOAD Jan 22 00:45:35.406000 audit[5405]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=5394 pid=5405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306132646232386535363465303236653062613132313366636166 Jan 22 00:45:35.407000 audit: BPF 
prog-id=242 op=LOAD Jan 22 00:45:35.407000 audit[5405]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=5394 pid=5405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306132646232386535363465303236653062613132313366636166 Jan 22 00:45:35.407000 audit: BPF prog-id=242 op=UNLOAD Jan 22 00:45:35.407000 audit[5405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5394 pid=5405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306132646232386535363465303236653062613132313366636166 Jan 22 00:45:35.407000 audit: BPF prog-id=241 op=UNLOAD Jan 22 00:45:35.407000 audit[5405]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5394 pid=5405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306132646232386535363465303236653062613132313366636166 Jan 22 00:45:35.407000 audit: BPF prog-id=243 op=LOAD Jan 22 00:45:35.407000 audit[5405]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=5394 pid=5405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637306132646232386535363465303236653062613132313366636166 Jan 22 00:45:35.439132 containerd[2460]: time="2026-01-22T00:45:35.439007238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ctkzj,Uid:9f7bd783-d45a-483f-ab06-4c0a65b60ba1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4\"" Jan 22 00:45:35.443097 containerd[2460]: time="2026-01-22T00:45:35.443052389Z" level=info msg="CreateContainer within sandbox \"f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:45:35.468980 containerd[2460]: time="2026-01-22T00:45:35.468420424Z" level=info msg="Container 38856c42f4c18dae4959a4a4ffce9ad342ddffdfdb3b18b367ef1e55f18f553e: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:45:35.472483 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1259773256.mount: Deactivated successfully. Jan 22 00:45:35.481965 containerd[2460]: time="2026-01-22T00:45:35.481938874Z" level=info msg="CreateContainer within sandbox \"f70a2db28e564e026e0ba1213fcaf1b9c066553c04166f9dfa87a99e1ff13dd4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38856c42f4c18dae4959a4a4ffce9ad342ddffdfdb3b18b367ef1e55f18f553e\"" Jan 22 00:45:35.482392 containerd[2460]: time="2026-01-22T00:45:35.482359097Z" level=info msg="StartContainer for \"38856c42f4c18dae4959a4a4ffce9ad342ddffdfdb3b18b367ef1e55f18f553e\"" Jan 22 00:45:35.483463 containerd[2460]: time="2026-01-22T00:45:35.483404551Z" level=info msg="connecting to shim 38856c42f4c18dae4959a4a4ffce9ad342ddffdfdb3b18b367ef1e55f18f553e" address="unix:///run/containerd/s/9753b798d136e0ff8350cfa5e69662cb642b194c26be83880de7b1b066c5d8bc" protocol=ttrpc version=3 Jan 22 00:45:35.502883 systemd[1]: Started cri-containerd-38856c42f4c18dae4959a4a4ffce9ad342ddffdfdb3b18b367ef1e55f18f553e.scope - libcontainer container 38856c42f4c18dae4959a4a4ffce9ad342ddffdfdb3b18b367ef1e55f18f553e. Jan 22 00:45:35.510000 audit: BPF prog-id=244 op=LOAD Jan 22 00:45:35.511000 audit: BPF prog-id=245 op=LOAD Jan 22 00:45:35.511000 audit[5431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5394 pid=5431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383536633432663463313864616534393539613461346666636539 Jan 22 00:45:35.511000 audit: BPF prog-id=245 op=UNLOAD Jan 22 00:45:35.511000 audit[5431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5394 pid=5431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383536633432663463313864616534393539613461346666636539 Jan 22 00:45:35.511000 audit: BPF prog-id=246 op=LOAD Jan 22 00:45:35.511000 audit[5431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5394 pid=5431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383536633432663463313864616534393539613461346666636539 Jan 22 00:45:35.511000 audit: BPF prog-id=247 op=LOAD Jan 22 00:45:35.511000 audit[5431]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5394 pid=5431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383536633432663463313864616534393539613461346666636539 Jan 22 00:45:35.511000 audit: BPF prog-id=247 op=UNLOAD Jan 22 00:45:35.511000 audit[5431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5394 pid=5431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383536633432663463313864616534393539613461346666636539 Jan 22 00:45:35.511000 audit: BPF prog-id=246 op=UNLOAD Jan 22 00:45:35.511000 audit[5431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5394 pid=5431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383536633432663463313864616534393539613461346666636539 Jan 22 00:45:35.511000 audit: BPF prog-id=248 op=LOAD Jan 22 00:45:35.511000 audit[5431]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5394 pid=5431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:35.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383536633432663463313864616534393539613461346666636539 Jan 22 00:45:35.533555 containerd[2460]: time="2026-01-22T00:45:35.533524805Z" level=info msg="StartContainer for \"38856c42f4c18dae4959a4a4ffce9ad342ddffdfdb3b18b367ef1e55f18f553e\" returns successfully" Jan 22 00:45:35.747967 systemd-networkd[2242]: cali26835b9dcf2: Gained IPv6LL Jan 22 00:45:36.185353 containerd[2460]: time="2026-01-22T00:45:36.185310422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78h9h,Uid:222ac10e-a19c-48d7-ba2a-f1cdbf34cf86,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:36.285240 systemd-networkd[2242]: cali46e7fbae75f: Link UP Jan 22 00:45:36.285908 systemd-networkd[2242]: cali46e7fbae75f: Gained carrier Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.226 [INFO][5466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0 csi-node-driver- calico-system 222ac10e-a19c-48d7-ba2a-f1cdbf34cf86 686 0 2026-01-22 00:45:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 csi-node-driver-78h9h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali46e7fbae75f [] [] }} ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.226 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.248 [INFO][5477] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" HandleID="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Workload="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.248 [INFO][5477] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" HandleID="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Workload="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"csi-node-driver-78h9h", "timestamp":"2026-01-22 00:45:36.24878663 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.248 [INFO][5477] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.248 [INFO][5477] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
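The audit stream above shows runc loading a series of BPF programs while it sets up the coredns container (op=LOAD) and immediately unloading the short-lived ones again (op=UNLOAD). When skimming an excerpt like this, it can help to tally the events and see which prog-ids are left without a matching UNLOAD; the stdlib-only sketch below does that over plain journal text (the function name and the trimmed sample lines are illustrative, mirroring the records above).

```python
# Minimal sketch: pair up "audit: BPF prog-id=N op=LOAD/UNLOAD" events from
# journal text and report prog-ids with no matching UNLOAD in the excerpt.
# Function name and sample lines are illustrative; the regex matches the
# audit lines as they appear in this log.
import re
from collections import Counter

BPF_EVENT = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def unmatched_loads(journal_text: str) -> list[int]:
    balance = Counter()
    for prog_id, op in BPF_EVENT.findall(journal_text):
        balance[int(prog_id)] += 1 if op == "LOAD" else -1
    return sorted(pid for pid, n in balance.items() if n > 0)

if __name__ == "__main__":
    sample = """
    Jan 22 00:45:35.510000 audit: BPF prog-id=244 op=LOAD
    Jan 22 00:45:35.511000 audit: BPF prog-id=245 op=LOAD
    Jan 22 00:45:35.511000 audit: BPF prog-id=245 op=UNLOAD
    Jan 22 00:45:35.511000 audit: BPF prog-id=246 op=LOAD
    Jan 22 00:45:35.511000 audit: BPF prog-id=247 op=LOAD
    Jan 22 00:45:35.511000 audit: BPF prog-id=247 op=UNLOAD
    Jan 22 00:45:35.511000 audit: BPF prog-id=246 op=UNLOAD
    Jan 22 00:45:35.511000 audit: BPF prog-id=248 op=LOAD
    """
    print(unmatched_loads(sample))  # -> [244, 248] for the excerpt above
```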
Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.249 [INFO][5477] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.256 [INFO][5477] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.259 [INFO][5477] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.262 [INFO][5477] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.264 [INFO][5477] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.265 [INFO][5477] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.265 [INFO][5477] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.266 [INFO][5477] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320 Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.273 [INFO][5477] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.281 [INFO][5477] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.132/26] block=192.168.22.128/26 handle="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.281 [INFO][5477] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.132/26] handle="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.281 [INFO][5477] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
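In the IPAM exchange above, Calico confirms this node's affinity for the 192.168.22.128/26 block and claims 192.168.22.132 from it for csi-node-driver-78h9h (addresses .133 and .134 follow from the same block a moment later in this log). The containment check behind "Affinity is confirmed and block has been loaded" is ordinary CIDR arithmetic; the short stdlib sketch below reproduces it for the addresses seen here, purely as an illustration (no Calico code is involved).

```python
# Minimal sketch: check that the IPs Calico hands out in this log fall inside
# the node's affine /26 block. Pure stdlib; the block and addresses are taken
# from the IPAM messages in this section.
import ipaddress

block = ipaddress.ip_network("192.168.22.128/26")   # node-affine block from the log
claimed = [ipaddress.ip_address(a) for a in
           ("192.168.22.132", "192.168.22.133", "192.168.22.134")]

for addr in claimed:
    assert addr in block, f"{addr} is outside {block}"

print(f"{block} holds {block.num_addresses} addresses "
      f"({len(claimed)} of them claimed in this excerpt)")
# -> 192.168.22.128/26 holds 64 addresses (3 of them claimed in this excerpt)
```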
Jan 22 00:45:36.301211 containerd[2460]: 2026-01-22 00:45:36.281 [INFO][5477] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.132/26] IPv6=[] ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" HandleID="k8s-pod-network.c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Workload="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" Jan 22 00:45:36.301993 containerd[2460]: 2026-01-22 00:45:36.283 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"222ac10e-a19c-48d7-ba2a-f1cdbf34cf86", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"csi-node-driver-78h9h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46e7fbae75f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:36.301993 containerd[2460]: 2026-01-22 00:45:36.283 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.132/32] ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" Jan 22 00:45:36.301993 containerd[2460]: 2026-01-22 00:45:36.283 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali46e7fbae75f ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" Jan 22 00:45:36.301993 containerd[2460]: 2026-01-22 00:45:36.286 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" Jan 22 00:45:36.301993 containerd[2460]: 2026-01-22 00:45:36.286 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"222ac10e-a19c-48d7-ba2a-f1cdbf34cf86", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320", Pod:"csi-node-driver-78h9h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.22.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali46e7fbae75f", MAC:"16:2c:0c:83:70:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:36.301993 containerd[2460]: 2026-01-22 00:45:36.299 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" Namespace="calico-system" Pod="csi-node-driver-78h9h" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-csi--node--driver--78h9h-eth0" Jan 22 00:45:36.312000 audit[5491]: NETFILTER_CFG table=filter:132 family=2 entries=44 op=nft_register_chain pid=5491 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:36.312000 audit[5491]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffec26aea40 a2=0 a3=7ffec26aea2c items=0 ppid=5048 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.312000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:36.334390 kubelet[3924]: E0122 00:45:36.334348 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:45:36.358887 containerd[2460]: 
time="2026-01-22T00:45:36.358852026Z" level=info msg="connecting to shim c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320" address="unix:///run/containerd/s/6f172c17473eb74c00222d11746ee011a7e3007808dd51fbb8067be3bb236c8d" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:36.371302 kubelet[3924]: I0122 00:45:36.371203 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ctkzj" podStartSLOduration=40.371184946 podStartE2EDuration="40.371184946s" podCreationTimestamp="2026-01-22 00:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:45:36.370098036 +0000 UTC m=+45.276331733" watchObservedRunningTime="2026-01-22 00:45:36.371184946 +0000 UTC m=+45.277418638" Jan 22 00:45:36.387950 systemd[1]: Started cri-containerd-c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320.scope - libcontainer container c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320. Jan 22 00:45:36.392000 audit[5525]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=5525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:36.392000 audit[5525]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe8e0e9330 a2=0 a3=7ffe8e0e931c items=0 ppid=4073 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:36.396000 audit[5525]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=5525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:36.396000 audit[5525]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe8e0e9330 a2=0 a3=0 items=0 ppid=4073 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:36.398000 audit: BPF prog-id=249 op=LOAD Jan 22 00:45:36.398000 audit: BPF prog-id=250 op=LOAD Jan 22 00:45:36.398000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5500 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643135353434636537626135386663616365356363336633643066 Jan 22 00:45:36.398000 audit: BPF prog-id=250 op=UNLOAD Jan 22 00:45:36.398000 audit[5511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 
00:45:36.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643135353434636537626135386663616365356363336633643066 Jan 22 00:45:36.398000 audit: BPF prog-id=251 op=LOAD Jan 22 00:45:36.398000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5500 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643135353434636537626135386663616365356363336633643066 Jan 22 00:45:36.398000 audit: BPF prog-id=252 op=LOAD Jan 22 00:45:36.398000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=5500 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643135353434636537626135386663616365356363336633643066 Jan 22 00:45:36.398000 audit: BPF prog-id=252 op=UNLOAD Jan 22 00:45:36.398000 audit[5511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643135353434636537626135386663616365356363336633643066 Jan 22 00:45:36.398000 audit: BPF prog-id=251 op=UNLOAD Jan 22 00:45:36.398000 audit[5511]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5500 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643135353434636537626135386663616365356363336633643066 Jan 22 00:45:36.398000 audit: BPF prog-id=253 op=LOAD Jan 22 00:45:36.398000 audit[5511]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=5500 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:36.398000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6332643135353434636537626135386663616365356363336633643066 Jan 22 00:45:36.417205 containerd[2460]: time="2026-01-22T00:45:36.417179839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-78h9h,Uid:222ac10e-a19c-48d7-ba2a-f1cdbf34cf86,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2d15544ce7ba58fcace5cc3f3d0fa8b6a7ad8208268de0ce46aa3cde8c0a320\"" Jan 22 00:45:36.418720 containerd[2460]: time="2026-01-22T00:45:36.418697276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:45:36.660451 containerd[2460]: time="2026-01-22T00:45:36.660403230Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:36.663323 containerd[2460]: time="2026-01-22T00:45:36.663284016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:45:36.663382 containerd[2460]: time="2026-01-22T00:45:36.663364918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:36.663590 kubelet[3924]: E0122 00:45:36.663488 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:45:36.663590 kubelet[3924]: E0122 00:45:36.663533 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:45:36.663689 kubelet[3924]: E0122 00:45:36.663655 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:36.666421 containerd[2460]: time="2026-01-22T00:45:36.666220344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:45:36.835872 systemd-networkd[2242]: cali7bc178b1472: Gained IPv6LL Jan 22 00:45:36.923883 containerd[2460]: time="2026-01-22T00:45:36.923712522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:36.926439 containerd[2460]: time="2026-01-22T00:45:36.926394368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:45:36.926519 containerd[2460]: time="2026-01-22T00:45:36.926483585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:36.926674 kubelet[3924]: E0122 00:45:36.926630 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:45:36.926727 kubelet[3924]: E0122 00:45:36.926675 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:45:36.926860 kubelet[3924]: E0122 00:45:36.926823 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:36.928831 kubelet[3924]: E0122 00:45:36.928795 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:37.185421 containerd[2460]: time="2026-01-22T00:45:37.185134883Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5555c47f4f-szgpn,Uid:1c96102f-5e79-4a6e-9dde-550f505c5961,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:37.272815 systemd-networkd[2242]: cali7156e5bc67e: Link UP Jan 22 00:45:37.273455 systemd-networkd[2242]: cali7156e5bc67e: Gained carrier Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.222 [INFO][5540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0 calico-kube-controllers-5555c47f4f- calico-system 1c96102f-5e79-4a6e-9dde-550f505c5961 801 0 2026-01-22 00:45:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5555c47f4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 calico-kube-controllers-5555c47f4f-szgpn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7156e5bc67e [] [] }} ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.222 [INFO][5540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.241 [INFO][5551] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" HandleID="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.241 [INFO][5551] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" HandleID="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"calico-kube-controllers-5555c47f4f-szgpn", "timestamp":"2026-01-22 00:45:37.241162477 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.241 [INFO][5551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.241 [INFO][5551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
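The audit records in this section encode each runc and iptables-restore invocation as a hex proctitle= string with NUL bytes between arguments, which is unreadable as logged. A small decoder (a sketch; the helper name is mine) recovers the argv with the standard library only, shown on one of the iptables-restore proctitle values recorded above.

```python
# Minimal sketch: turn an audit proctitle= hex string back into its argv.
# proctitle encodes the process command line as hex bytes with NUL separators
# between arguments; the helper name is illustrative, not from any tool here.
def decode_proctitle(hex_string: str) -> list[str]:
    raw = bytes.fromhex(hex_string)
    return [part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    # One of the iptables-restore proctitle values recorded above.
    sample = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    print(decode_proctitle(sample))
    # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```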
Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.241 [INFO][5551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.245 [INFO][5551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.249 [INFO][5551] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.252 [INFO][5551] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.253 [INFO][5551] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.255 [INFO][5551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.256 [INFO][5551] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.257 [INFO][5551] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620 Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.264 [INFO][5551] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.269 [INFO][5551] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.133/26] block=192.168.22.128/26 handle="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.269 [INFO][5551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.133/26] handle="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.269 [INFO][5551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
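The recurring failures in this section are image pulls: ghcr.io/flatcar/calico/apiserver, csi, and node-driver-registrar at tag v3.30.4 all resolve to 404 Not Found, so kubelet cycles the affected pods through ErrImagePull and ImagePullBackOff (calico-kube-controllers hits the same wall just below). When triaging a log like this it helps to boil the noise down to which image is failing for which pod; the sketch below does that with regexes written against the kubelet "Error syncing pod" message shape above (the patterns and the abbreviated sample line are mine, not a kubelet API).

```python
# Minimal sketch: summarise which images are failing to pull for which pods,
# from kubelet "Error syncing pod, skipping" lines like the ones above.
# The patterns target the message shape in this log, not a kubelet API.
import re

IMAGE = re.compile(r'ghcr\.io/flatcar/calico/[A-Za-z0-9._/-]+:v[0-9.]+')
POD = re.compile(r'pod="([^"]+)"')

def failing_pulls(journal_text: str) -> dict[str, set[str]]:
    """Map namespace/pod -> image references mentioned in its pull errors."""
    result: dict[str, set[str]] = {}
    for line in journal_text.splitlines():
        if "Error syncing pod" not in line:
            continue
        images = IMAGE.findall(line)
        for pod in POD.findall(line):
            result.setdefault(pod, set()).update(images)
    return result

if __name__ == "__main__":
    # Abbreviated stand-in for one of the kubelet lines above.
    sample = (
        'kubelet[3924]: "Error syncing pod, skipping" '
        'err="... ghcr.io/flatcar/calico/csi:v3.30.4 ... '
        'ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4 ..." '
        'pod="calico-system/csi-node-driver-78h9h"'
    )
    print(failing_pulls(sample))
    # prints the pod mapped to both failing image references
```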
Jan 22 00:45:37.289032 containerd[2460]: 2026-01-22 00:45:37.269 [INFO][5551] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.133/26] IPv6=[] ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" HandleID="k8s-pod-network.2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" Jan 22 00:45:37.290663 containerd[2460]: 2026-01-22 00:45:37.270 [INFO][5540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0", GenerateName:"calico-kube-controllers-5555c47f4f-", Namespace:"calico-system", SelfLink:"", UID:"1c96102f-5e79-4a6e-9dde-550f505c5961", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5555c47f4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"calico-kube-controllers-5555c47f4f-szgpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7156e5bc67e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:37.290663 containerd[2460]: 2026-01-22 00:45:37.271 [INFO][5540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.133/32] ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" Jan 22 00:45:37.290663 containerd[2460]: 2026-01-22 00:45:37.271 [INFO][5540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7156e5bc67e ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" Jan 22 00:45:37.290663 containerd[2460]: 2026-01-22 00:45:37.273 [INFO][5540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" 
WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" Jan 22 00:45:37.290663 containerd[2460]: 2026-01-22 00:45:37.273 [INFO][5540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0", GenerateName:"calico-kube-controllers-5555c47f4f-", Namespace:"calico-system", SelfLink:"", UID:"1c96102f-5e79-4a6e-9dde-550f505c5961", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5555c47f4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620", Pod:"calico-kube-controllers-5555c47f4f-szgpn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7156e5bc67e", MAC:"a2:b9:13:4b:fb:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:37.290663 containerd[2460]: 2026-01-22 00:45:37.285 [INFO][5540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" Namespace="calico-system" Pod="calico-kube-controllers-5555c47f4f-szgpn" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--kube--controllers--5555c47f4f--szgpn-eth0" Jan 22 00:45:37.300000 audit[5564]: NETFILTER_CFG table=filter:135 family=2 entries=54 op=nft_register_chain pid=5564 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:37.300000 audit[5564]: SYSCALL arch=c000003e syscall=46 success=yes exit=25992 a0=3 a1=7ffc7d08a500 a2=0 a3=7ffc7d08a4ec items=0 ppid=5048 pid=5564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.300000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:37.330084 containerd[2460]: time="2026-01-22T00:45:37.330047935Z" level=info msg="connecting to shim 2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620" address="unix:///run/containerd/s/ec103c7e4e258279efc321f6796f07adc5346ed153ac0c23260ad86acd091814" 
namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:37.346397 kubelet[3924]: E0122 00:45:37.346175 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:37.361130 systemd[1]: Started cri-containerd-2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620.scope - libcontainer container 2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620. Jan 22 00:45:37.382000 audit: BPF prog-id=254 op=LOAD Jan 22 00:45:37.383000 audit: BPF prog-id=255 op=LOAD Jan 22 00:45:37.383000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5574 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263363539313563376661303761646636323334353962303561613931 Jan 22 00:45:37.383000 audit: BPF prog-id=255 op=UNLOAD Jan 22 00:45:37.383000 audit[5585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5574 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263363539313563376661303761646636323334353962303561613931 Jan 22 00:45:37.383000 audit: BPF prog-id=256 op=LOAD Jan 22 00:45:37.383000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5574 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263363539313563376661303761646636323334353962303561613931 Jan 22 00:45:37.383000 audit: BPF prog-id=257 op=LOAD Jan 22 00:45:37.383000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5574 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263363539313563376661303761646636323334353962303561613931 Jan 22 00:45:37.384000 audit: BPF prog-id=257 op=UNLOAD Jan 22 00:45:37.384000 audit[5585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5574 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263363539313563376661303761646636323334353962303561613931 Jan 22 00:45:37.384000 audit: BPF prog-id=256 op=UNLOAD Jan 22 00:45:37.384000 audit[5585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5574 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263363539313563376661303761646636323334353962303561613931 Jan 22 00:45:37.384000 audit: BPF prog-id=258 op=LOAD Jan 22 00:45:37.384000 audit[5585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5574 pid=5585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263363539313563376661303761646636323334353962303561613931 Jan 22 00:45:37.396000 audit[5605]: NETFILTER_CFG table=filter:136 family=2 entries=17 op=nft_register_rule pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:37.396000 audit[5605]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff7a856570 a2=0 a3=7fff7a85655c items=0 ppid=4073 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:37.403000 audit[5605]: NETFILTER_CFG table=nat:137 family=2 entries=35 op=nft_register_chain pid=5605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:37.403000 audit[5605]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff7a856570 a2=0 a3=7fff7a85655c items=0 ppid=4073 pid=5605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:37.403000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:37.428650 containerd[2460]: time="2026-01-22T00:45:37.428596578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5555c47f4f-szgpn,Uid:1c96102f-5e79-4a6e-9dde-550f505c5961,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c65915c7fa07adf623459b05aa91230ddfff6135fcd79e4ded045ee59efe620\"" Jan 22 00:45:37.430060 containerd[2460]: time="2026-01-22T00:45:37.430015554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:45:37.540635 systemd-networkd[2242]: cali46e7fbae75f: Gained IPv6LL Jan 22 00:45:37.678459 containerd[2460]: time="2026-01-22T00:45:37.678402398Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:37.681486 containerd[2460]: time="2026-01-22T00:45:37.681434940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:45:37.681676 containerd[2460]: time="2026-01-22T00:45:37.681470165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:37.682764 kubelet[3924]: E0122 00:45:37.682648 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:45:37.682764 kubelet[3924]: E0122 00:45:37.682702 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:45:37.682933 kubelet[3924]: E0122 00:45:37.682861 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h22j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5555c47f4f-szgpn_calico-system(1c96102f-5e79-4a6e-9dde-550f505c5961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:37.684279 kubelet[3924]: E0122 00:45:37.684247 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:45:38.185366 containerd[2460]: time="2026-01-22T00:45:38.185302557Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-68764f557f-r6nmq,Uid:77488da9-2016-4a7d-b29a-dee9cf79fa65,Namespace:calico-apiserver,Attempt:0,}" Jan 22 00:45:38.185792 containerd[2460]: time="2026-01-22T00:45:38.185459078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7ms67,Uid:0681ac92-d1cc-4472-a4c1-25459fbeba8f,Namespace:kube-system,Attempt:0,}" Jan 22 00:45:38.185792 containerd[2460]: time="2026-01-22T00:45:38.185622529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmkws,Uid:60deceea-d34a-4dce-b9f3-936e34e45689,Namespace:calico-system,Attempt:0,}" Jan 22 00:45:38.345845 kubelet[3924]: E0122 00:45:38.345801 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:45:38.346301 kubelet[3924]: E0122 00:45:38.346052 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:38.359346 systemd-networkd[2242]: cali7d5b966f855: Link UP Jan 22 00:45:38.361144 systemd-networkd[2242]: cali7d5b966f855: Gained carrier Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.275 [INFO][5619] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0 coredns-668d6bf9bc- kube-system 0681ac92-d1cc-4472-a4c1-25459fbeba8f 796 0 2026-01-22 00:44:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 coredns-668d6bf9bc-7ms67 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d5b966f855 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ms67" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.275 [INFO][5619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-7ms67" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.315 [INFO][5659] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" HandleID="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Workload="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.315 [INFO][5659] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" HandleID="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Workload="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048e1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"coredns-668d6bf9bc-7ms67", "timestamp":"2026-01-22 00:45:38.315093852 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.315 [INFO][5659] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.315 [INFO][5659] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.315 [INFO][5659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.322 [INFO][5659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.328 [INFO][5659] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.332 [INFO][5659] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.333 [INFO][5659] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.335 [INFO][5659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.335 [INFO][5659] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.336 [INFO][5659] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.340 [INFO][5659] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" host="ci-4515.1.0-n-d879fbfda5" Jan 22 
00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.348 [INFO][5659] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.134/26] block=192.168.22.128/26 handle="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.349 [INFO][5659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.134/26] handle="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.349 [INFO][5659] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 22 00:45:38.379224 containerd[2460]: 2026-01-22 00:45:38.349 [INFO][5659] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.134/26] IPv6=[] ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" HandleID="k8s-pod-network.677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Workload="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" Jan 22 00:45:38.381161 containerd[2460]: 2026-01-22 00:45:38.353 [INFO][5619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ms67" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0681ac92-d1cc-4472-a4c1-25459fbeba8f", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"coredns-668d6bf9bc-7ms67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d5b966f855", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:38.381161 containerd[2460]: 2026-01-22 00:45:38.353 [INFO][5619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.134/32] ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ms67" 
WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" Jan 22 00:45:38.381161 containerd[2460]: 2026-01-22 00:45:38.353 [INFO][5619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d5b966f855 ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ms67" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" Jan 22 00:45:38.381161 containerd[2460]: 2026-01-22 00:45:38.359 [INFO][5619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ms67" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" Jan 22 00:45:38.381161 containerd[2460]: 2026-01-22 00:45:38.359 [INFO][5619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ms67" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0681ac92-d1cc-4472-a4c1-25459fbeba8f", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 44, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba", Pod:"coredns-668d6bf9bc-7ms67", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d5b966f855", MAC:"62:f1:10:4c:16:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:38.381161 containerd[2460]: 2026-01-22 00:45:38.375 [INFO][5619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" Namespace="kube-system" Pod="coredns-668d6bf9bc-7ms67" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-coredns--668d6bf9bc--7ms67-eth0" Jan 22 00:45:38.399000 audit[5689]: NETFILTER_CFG table=filter:138 family=2 entries=36 op=nft_register_chain pid=5689 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:38.399000 audit[5689]: SYSCALL arch=c000003e syscall=46 success=yes exit=19176 a0=3 a1=7ffc67c76170 a2=0 a3=7ffc67c7615c items=0 ppid=5048 pid=5689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.399000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:38.426718 containerd[2460]: time="2026-01-22T00:45:38.426168324Z" level=info msg="connecting to shim 677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba" address="unix:///run/containerd/s/d968d9c95ed5d83c54534774f6557f74ca5e29c4da2b9525959534e977447a84" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:38.458923 systemd[1]: Started cri-containerd-677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba.scope - libcontainer container 677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba. Jan 22 00:45:38.470893 systemd-networkd[2242]: cali93ae976d145: Link UP Jan 22 00:45:38.471731 systemd-networkd[2242]: cali93ae976d145: Gained carrier Jan 22 00:45:38.483000 audit: BPF prog-id=259 op=LOAD Jan 22 00:45:38.483000 audit: BPF prog-id=260 op=LOAD Jan 22 00:45:38.483000 audit[5710]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5699 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376534346630313538623263613033356230333234333733366366 Jan 22 00:45:38.483000 audit: BPF prog-id=260 op=UNLOAD Jan 22 00:45:38.483000 audit[5710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5699 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376534346630313538623263613033356230333234333733366366 Jan 22 00:45:38.483000 audit: BPF prog-id=261 op=LOAD Jan 22 00:45:38.483000 audit[5710]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5699 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376534346630313538623263613033356230333234333733366366 Jan 22 00:45:38.483000 audit: BPF prog-id=262 op=LOAD Jan 22 00:45:38.483000 audit[5710]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c0001a8218 a2=98 a3=0 items=0 ppid=5699 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376534346630313538623263613033356230333234333733366366 Jan 22 00:45:38.483000 audit: BPF prog-id=262 op=UNLOAD Jan 22 00:45:38.483000 audit[5710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5699 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376534346630313538623263613033356230333234333733366366 Jan 22 00:45:38.484000 audit: BPF prog-id=261 op=UNLOAD Jan 22 00:45:38.484000 audit[5710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5699 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376534346630313538623263613033356230333234333733366366 Jan 22 00:45:38.484000 audit: BPF prog-id=263 op=LOAD Jan 22 00:45:38.484000 audit[5710]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5699 pid=5710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637376534346630313538623263613033356230333234333733366366 Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.279 [INFO][5623] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0 calico-apiserver-68764f557f- calico-apiserver 77488da9-2016-4a7d-b29a-dee9cf79fa65 807 0 2026-01-22 00:45:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68764f557f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 calico-apiserver-68764f557f-r6nmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali93ae976d145 [] [] }} ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" 
WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.281 [INFO][5623] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.327 [INFO][5661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" HandleID="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.327 [INFO][5661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" HandleID="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"calico-apiserver-68764f557f-r6nmq", "timestamp":"2026-01-22 00:45:38.32772131 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.328 [INFO][5661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.349 [INFO][5661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.349 [INFO][5661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.422 [INFO][5661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.429 [INFO][5661] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.433 [INFO][5661] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.436 [INFO][5661] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.442 [INFO][5661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.442 [INFO][5661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.443 [INFO][5661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31 Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.448 [INFO][5661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.461 [INFO][5661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.135/26] block=192.168.22.128/26 handle="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.462 [INFO][5661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.135/26] handle="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.462 [INFO][5661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
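Every assignment in these IPAM walks comes out of the node's affine block 192.168.22.128/26: coredns received 192.168.22.134 above and this calico-apiserver pod 192.168.22.135. A minimal sketch, using only the Go standard library, of what that block covers (addresses copied from the records above):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The affine block and the addresses claimed so far in the records above.
        block := netip.MustParsePrefix("192.168.22.128/26")
        claimed := []netip.Addr{
            netip.MustParseAddr("192.168.22.134"), // coredns-668d6bf9bc-7ms67
            netip.MustParseAddr("192.168.22.135"), // calico-apiserver-68764f557f-r6nmq
        }

        // A /26 spans 2^(32-26) = 64 addresses: 192.168.22.128 .. 192.168.22.191.
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))
        for _, a := range claimed {
            fmt.Printf("%s in block: %v\n", a, block.Contains(a))
        }
    }
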
Jan 22 00:45:38.493065 containerd[2460]: 2026-01-22 00:45:38.462 [INFO][5661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.135/26] IPv6=[] ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" HandleID="k8s-pod-network.5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Workload="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" Jan 22 00:45:38.493963 containerd[2460]: 2026-01-22 00:45:38.467 [INFO][5623] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0", GenerateName:"calico-apiserver-68764f557f-", Namespace:"calico-apiserver", SelfLink:"", UID:"77488da9-2016-4a7d-b29a-dee9cf79fa65", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68764f557f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"calico-apiserver-68764f557f-r6nmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93ae976d145", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:38.493963 containerd[2460]: 2026-01-22 00:45:38.467 [INFO][5623] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.135/32] ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" Jan 22 00:45:38.493963 containerd[2460]: 2026-01-22 00:45:38.467 [INFO][5623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93ae976d145 ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" Jan 22 00:45:38.493963 containerd[2460]: 2026-01-22 00:45:38.474 [INFO][5623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" Jan 22 00:45:38.493963 containerd[2460]: 2026-01-22 00:45:38.475 
[INFO][5623] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0", GenerateName:"calico-apiserver-68764f557f-", Namespace:"calico-apiserver", SelfLink:"", UID:"77488da9-2016-4a7d-b29a-dee9cf79fa65", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68764f557f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31", Pod:"calico-apiserver-68764f557f-r6nmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali93ae976d145", MAC:"5e:d1:ee:2d:7c:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:38.493963 containerd[2460]: 2026-01-22 00:45:38.490 [INFO][5623] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" Namespace="calico-apiserver" Pod="calico-apiserver-68764f557f-r6nmq" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-calico--apiserver--68764f557f--r6nmq-eth0" Jan 22 00:45:38.519000 audit[5741]: NETFILTER_CFG table=filter:139 family=2 entries=49 op=nft_register_chain pid=5741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:38.519000 audit[5741]: SYSCALL arch=c000003e syscall=46 success=yes exit=25436 a0=3 a1=7ffea5ea9300 a2=0 a3=7ffea5ea92ec items=0 ppid=5048 pid=5741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:38.529698 containerd[2460]: time="2026-01-22T00:45:38.529637769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7ms67,Uid:0681ac92-d1cc-4472-a4c1-25459fbeba8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba\"" Jan 22 00:45:38.544086 containerd[2460]: time="2026-01-22T00:45:38.544060078Z" level=info msg="CreateContainer within sandbox 
\"677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 22 00:45:38.563876 systemd-networkd[2242]: calif3ee68d8ed2: Link UP Jan 22 00:45:38.564926 systemd-networkd[2242]: calif3ee68d8ed2: Gained carrier Jan 22 00:45:38.569400 containerd[2460]: time="2026-01-22T00:45:38.569368307Z" level=info msg="connecting to shim 5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31" address="unix:///run/containerd/s/a81a6a8bc7aaf523b4a07ac0f26ec56bc9ec0dd48c0db71808ac656ef8ae9e46" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:38.584860 containerd[2460]: time="2026-01-22T00:45:38.584835318Z" level=info msg="Container 9761f6e0db9d9ca4b75e01ff2daf1870b9b154910029efca2f63fbe44b7e8dd7: CDI devices from CRI Config.CDIDevices: []" Jan 22 00:45:38.587141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634934578.mount: Deactivated successfully. Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.289 [INFO][5636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0 goldmane-666569f655- calico-system 60deceea-d34a-4dce-b9f3-936e34e45689 804 0 2026-01-22 00:45:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515.1.0-n-d879fbfda5 goldmane-666569f655-pmkws eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif3ee68d8ed2 [] [] }} ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.289 [INFO][5636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.335 [INFO][5669] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" HandleID="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Workload="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.335 [INFO][5669] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" HandleID="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Workload="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515.1.0-n-d879fbfda5", "pod":"goldmane-666569f655-pmkws", "timestamp":"2026-01-22 00:45:38.335503835 +0000 UTC"}, Hostname:"ci-4515.1.0-n-d879fbfda5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 
00:45:38.335 [INFO][5669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.462 [INFO][5669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.462 [INFO][5669] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515.1.0-n-d879fbfda5' Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.526 [INFO][5669] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.530 [INFO][5669] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.540 [INFO][5669] ipam/ipam.go 511: Trying affinity for 192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.542 [INFO][5669] ipam/ipam.go 158: Attempting to load block cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.544 [INFO][5669] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.22.128/26 host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.544 [INFO][5669] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.22.128/26 handle="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.546 [INFO][5669] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8 Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.550 [INFO][5669] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.22.128/26 handle="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.558 [INFO][5669] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.22.136/26] block=192.168.22.128/26 handle="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.558 [INFO][5669] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.22.136/26] handle="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" host="ci-4515.1.0-n-d879fbfda5" Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.558 [INFO][5669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
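The three concurrent sandbox setups serialize on the host-wide IPAM lock visible in these records: the coredns request ([5659]) holds it from 00:45:38.315 to 38.349, the calico-apiserver request ([5661]) from 38.349 to 38.462, and this goldmane request ([5669]) from 38.462 to 38.558. A short sketch recomputing those hold times from the timestamps as printed; the pod names and times are copied from the records above, nothing else is assumed:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // "Acquired"/"Released host-wide IPAM lock" timestamps as printed above.
        const layout = "2006-01-02 15:04:05.000"
        holds := []struct{ pod, acquired, released string }{
            {"coredns-668d6bf9bc-7ms67", "2026-01-22 00:45:38.315", "2026-01-22 00:45:38.349"},
            {"calico-apiserver-68764f557f-r6nmq", "2026-01-22 00:45:38.349", "2026-01-22 00:45:38.462"},
            {"goldmane-666569f655-pmkws", "2026-01-22 00:45:38.462", "2026-01-22 00:45:38.558"},
        }
        for _, h := range holds {
            a, _ := time.Parse(layout, h.acquired)  // inputs are fixed above,
            r, _ := time.Parse(layout, h.released)  // errors ignored in this sketch
            fmt.Printf("%-36s held the IPAM lock for %v\n", h.pod, r.Sub(a))
        }
    }
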
Jan 22 00:45:38.606935 containerd[2460]: 2026-01-22 00:45:38.558 [INFO][5669] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.22.136/26] IPv6=[] ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" HandleID="k8s-pod-network.f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Workload="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" Jan 22 00:45:38.608440 containerd[2460]: 2026-01-22 00:45:38.561 [INFO][5636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"60deceea-d34a-4dce-b9f3-936e34e45689", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"", Pod:"goldmane-666569f655-pmkws", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif3ee68d8ed2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:38.608440 containerd[2460]: 2026-01-22 00:45:38.561 [INFO][5636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.22.136/32] ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" Jan 22 00:45:38.608440 containerd[2460]: 2026-01-22 00:45:38.561 [INFO][5636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3ee68d8ed2 ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" Jan 22 00:45:38.608440 containerd[2460]: 2026-01-22 00:45:38.566 [INFO][5636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" Jan 22 00:45:38.608440 containerd[2460]: 2026-01-22 00:45:38.573 [INFO][5636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" 
Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"60deceea-d34a-4dce-b9f3-936e34e45689", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 22, 0, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515.1.0-n-d879fbfda5", ContainerID:"f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8", Pod:"goldmane-666569f655-pmkws", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.22.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif3ee68d8ed2", MAC:"3e:6b:49:1f:b4:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 22 00:45:38.608440 containerd[2460]: 2026-01-22 00:45:38.602 [INFO][5636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" Namespace="calico-system" Pod="goldmane-666569f655-pmkws" WorkloadEndpoint="ci--4515.1.0--n--d879fbfda5-k8s-goldmane--666569f655--pmkws-eth0" Jan 22 00:45:38.611458 containerd[2460]: time="2026-01-22T00:45:38.611323491Z" level=info msg="CreateContainer within sandbox \"677e44f0158b2ca035b03243736cf1f469069ca72a60ff31d475d748cbb186ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9761f6e0db9d9ca4b75e01ff2daf1870b9b154910029efca2f63fbe44b7e8dd7\"" Jan 22 00:45:38.612648 containerd[2460]: time="2026-01-22T00:45:38.612259012Z" level=info msg="StartContainer for \"9761f6e0db9d9ca4b75e01ff2daf1870b9b154910029efca2f63fbe44b7e8dd7\"" Jan 22 00:45:38.615011 containerd[2460]: time="2026-01-22T00:45:38.614984295Z" level=info msg="connecting to shim 9761f6e0db9d9ca4b75e01ff2daf1870b9b154910029efca2f63fbe44b7e8dd7" address="unix:///run/containerd/s/d968d9c95ed5d83c54534774f6557f74ca5e29c4da2b9525959534e977447a84" protocol=ttrpc version=3 Jan 22 00:45:38.619120 systemd[1]: Started cri-containerd-5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31.scope - libcontainer container 5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31. 
Jan 22 00:45:38.633000 audit[5800]: NETFILTER_CFG table=filter:140 family=2 entries=60 op=nft_register_chain pid=5800 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 22 00:45:38.633000 audit[5800]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7ffcffc93da0 a2=0 a3=7ffcffc93d8c items=0 ppid=5048 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.633000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 22 00:45:38.636886 systemd[1]: Started cri-containerd-9761f6e0db9d9ca4b75e01ff2daf1870b9b154910029efca2f63fbe44b7e8dd7.scope - libcontainer container 9761f6e0db9d9ca4b75e01ff2daf1870b9b154910029efca2f63fbe44b7e8dd7. Jan 22 00:45:38.638000 audit: BPF prog-id=264 op=LOAD Jan 22 00:45:38.639000 audit: BPF prog-id=265 op=LOAD Jan 22 00:45:38.639000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5751 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564643235643335653132393561366261633331386336363737623865 Jan 22 00:45:38.639000 audit: BPF prog-id=265 op=UNLOAD Jan 22 00:45:38.639000 audit[5764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5751 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564643235643335653132393561366261633331386336363737623865 Jan 22 00:45:38.640000 audit: BPF prog-id=266 op=LOAD Jan 22 00:45:38.640000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5751 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564643235643335653132393561366261633331386336363737623865 Jan 22 00:45:38.640000 audit: BPF prog-id=267 op=LOAD Jan 22 00:45:38.640000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5751 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.640000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564643235643335653132393561366261633331386336363737623865 Jan 22 00:45:38.640000 audit: BPF prog-id=267 op=UNLOAD Jan 22 00:45:38.640000 audit[5764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5751 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564643235643335653132393561366261633331386336363737623865 Jan 22 00:45:38.640000 audit: BPF prog-id=266 op=UNLOAD Jan 22 00:45:38.640000 audit[5764]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5751 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564643235643335653132393561366261633331386336363737623865 Jan 22 00:45:38.640000 audit: BPF prog-id=268 op=LOAD Jan 22 00:45:38.640000 audit[5764]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5751 pid=5764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3564643235643335653132393561366261633331386336363737623865 Jan 22 00:45:38.649000 audit: BPF prog-id=269 op=LOAD Jan 22 00:45:38.650000 audit: BPF prog-id=270 op=LOAD Jan 22 00:45:38.650000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5699 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363166366530646239643963613462373565303166663264616631 Jan 22 00:45:38.650000 audit: BPF prog-id=270 op=UNLOAD Jan 22 00:45:38.650000 audit[5783]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5699 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.650000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363166366530646239643963613462373565303166663264616631 Jan 22 00:45:38.650000 audit: BPF prog-id=271 op=LOAD Jan 22 00:45:38.650000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5699 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363166366530646239643963613462373565303166663264616631 Jan 22 00:45:38.650000 audit: BPF prog-id=272 op=LOAD Jan 22 00:45:38.650000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5699 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363166366530646239643963613462373565303166663264616631 Jan 22 00:45:38.650000 audit: BPF prog-id=272 op=UNLOAD Jan 22 00:45:38.650000 audit[5783]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5699 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363166366530646239643963613462373565303166663264616631 Jan 22 00:45:38.650000 audit: BPF prog-id=271 op=UNLOAD Jan 22 00:45:38.650000 audit[5783]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5699 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363166366530646239643963613462373565303166663264616631 Jan 22 00:45:38.650000 audit: BPF prog-id=273 op=LOAD Jan 22 00:45:38.650000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5699 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.650000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937363166366530646239643963613462373565303166663264616631 Jan 22 00:45:38.664079 containerd[2460]: time="2026-01-22T00:45:38.662783863Z" level=info msg="connecting to shim f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8" address="unix:///run/containerd/s/203f2a2fb06c542c393612029242e5102e4a0498474398c89ccc5e75f93d8e0b" namespace=k8s.io protocol=ttrpc version=3 Jan 22 00:45:38.684845 containerd[2460]: time="2026-01-22T00:45:38.684815857Z" level=info msg="StartContainer for \"9761f6e0db9d9ca4b75e01ff2daf1870b9b154910029efca2f63fbe44b7e8dd7\" returns successfully" Jan 22 00:45:38.706029 systemd[1]: Started cri-containerd-f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8.scope - libcontainer container f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8. Jan 22 00:45:38.721000 audit: BPF prog-id=274 op=LOAD Jan 22 00:45:38.722000 audit: BPF prog-id=275 op=LOAD Jan 22 00:45:38.722000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5819 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638356538616234383661653566643033376162653264393066326137 Jan 22 00:45:38.722000 audit: BPF prog-id=275 op=UNLOAD Jan 22 00:45:38.722000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5819 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638356538616234383661653566643033376162653264393066326137 Jan 22 00:45:38.722000 audit: BPF prog-id=276 op=LOAD Jan 22 00:45:38.722000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5819 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638356538616234383661653566643033376162653264393066326137 Jan 22 00:45:38.722000 audit: BPF prog-id=277 op=LOAD Jan 22 00:45:38.722000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5819 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.722000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638356538616234383661653566643033376162653264393066326137 Jan 22 00:45:38.722000 audit: BPF prog-id=277 op=UNLOAD Jan 22 00:45:38.722000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5819 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638356538616234383661653566643033376162653264393066326137 Jan 22 00:45:38.722000 audit: BPF prog-id=276 op=UNLOAD Jan 22 00:45:38.722000 audit[5839]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5819 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638356538616234383661653566643033376162653264393066326137 Jan 22 00:45:38.723000 audit: BPF prog-id=278 op=LOAD Jan 22 00:45:38.723000 audit[5839]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5819 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:38.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638356538616234383661653566643033376162653264393066326137 Jan 22 00:45:38.736472 containerd[2460]: time="2026-01-22T00:45:38.736448392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68764f557f-r6nmq,Uid:77488da9-2016-4a7d-b29a-dee9cf79fa65,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5dd25d35e1295a6bac318c6677b8ea1340017c26cd0f7860b202b68b79131a31\"" Jan 22 00:45:38.738569 containerd[2460]: time="2026-01-22T00:45:38.738545775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:45:38.760005 containerd[2460]: time="2026-01-22T00:45:38.759981169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pmkws,Uid:60deceea-d34a-4dce-b9f3-936e34e45689,Namespace:calico-system,Attempt:0,} returns sandbox id \"f85e8ab486ae5fd037abe2d90f2a7b5a87a392cfc38e9f2eb74887394c8ea6e8\"" Jan 22 00:45:38.977517 containerd[2460]: time="2026-01-22T00:45:38.977342641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:38.980572 containerd[2460]: time="2026-01-22T00:45:38.980541712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:45:38.980683 containerd[2460]: time="2026-01-22T00:45:38.980619638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:38.980795 kubelet[3924]: E0122 00:45:38.980757 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:38.981520 kubelet[3924]: E0122 00:45:38.980809 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:38.981520 kubelet[3924]: E0122 00:45:38.981166 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsqnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68764f557f-r6nmq_calico-apiserver(77488da9-2016-4a7d-b29a-dee9cf79fa65): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:38.981687 containerd[2460]: time="2026-01-22T00:45:38.981220877Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:45:38.983246 kubelet[3924]: E0122 00:45:38.983214 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:45:39.236130 containerd[2460]: time="2026-01-22T00:45:39.236017583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:39.240885 containerd[2460]: time="2026-01-22T00:45:39.240846641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:45:39.241172 containerd[2460]: time="2026-01-22T00:45:39.240881133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:39.241283 kubelet[3924]: E0122 00:45:39.241254 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:45:39.241364 kubelet[3924]: E0122 00:45:39.241349 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:45:39.241600 kubelet[3924]: E0122 00:45:39.241557 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98q7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pmkws_calico-system(60deceea-d34a-4dce-b9f3-936e34e45689): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:39.242875 kubelet[3924]: E0122 00:45:39.242849 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:45:39.267909 systemd-networkd[2242]: cali7156e5bc67e: Gained IPv6LL Jan 22 00:45:39.353245 kubelet[3924]: E0122 00:45:39.353209 3924 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:45:39.359717 kubelet[3924]: E0122 00:45:39.359339 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:45:39.361610 kubelet[3924]: E0122 00:45:39.361584 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:45:39.390000 audit[5875]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:39.390000 audit[5875]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff688ce850 a2=0 a3=7fff688ce83c items=0 ppid=4073 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:39.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:39.402000 audit[5875]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:39.402000 audit[5875]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff688ce850 a2=0 a3=7fff688ce83c items=0 ppid=4073 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:39.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:39.413997 kubelet[3924]: I0122 00:45:39.413954 3924 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7ms67" podStartSLOduration=43.413938282 podStartE2EDuration="43.413938282s" podCreationTimestamp="2026-01-22 00:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 00:45:39.413496978 +0000 UTC 
m=+48.319730668" watchObservedRunningTime="2026-01-22 00:45:39.413938282 +0000 UTC m=+48.320171983" Jan 22 00:45:39.415000 audit[5877]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5877 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:39.415000 audit[5877]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc187b4c10 a2=0 a3=7ffc187b4bfc items=0 ppid=4073 pid=5877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:39.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:39.420000 audit[5877]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5877 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:39.420000 audit[5877]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc187b4c10 a2=0 a3=7ffc187b4bfc items=0 ppid=4073 pid=5877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:39.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:39.971900 systemd-networkd[2242]: calif3ee68d8ed2: Gained IPv6LL Jan 22 00:45:40.292043 systemd-networkd[2242]: cali7d5b966f855: Gained IPv6LL Jan 22 00:45:40.292866 systemd-networkd[2242]: cali93ae976d145: Gained IPv6LL Jan 22 00:45:40.365075 kubelet[3924]: E0122 00:45:40.365039 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:45:40.366462 kubelet[3924]: E0122 00:45:40.365357 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:45:40.421000 audit[5879]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:40.424172 kernel: kauditd_printk_skb: 214 callbacks suppressed Jan 22 00:45:40.424234 kernel: audit: type=1325 audit(1769042740.421:754): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:40.421000 audit[5879]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe4e6432d0 a2=0 a3=7ffe4e6432bc items=0 ppid=4073 pid=5879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:40.428810 kernel: audit: type=1300 audit(1769042740.421:754): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe4e6432d0 a2=0 a3=7ffe4e6432bc items=0 ppid=4073 pid=5879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:40.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:40.433858 kernel: audit: type=1327 audit(1769042740.421:754): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:40.440000 audit[5879]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:40.440000 audit[5879]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe4e6432d0 a2=0 a3=7ffe4e6432bc items=0 ppid=4073 pid=5879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:40.445757 kernel: audit: type=1325 audit(1769042740.440:755): table=nat:146 family=2 entries=56 op=nft_register_chain pid=5879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:45:40.445789 kernel: audit: type=1300 audit(1769042740.440:755): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe4e6432d0 a2=0 a3=7ffe4e6432bc items=0 ppid=4073 pid=5879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:45:40.440000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:40.453690 kernel: audit: type=1327 audit(1769042740.440:755): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:45:44.185922 containerd[2460]: time="2026-01-22T00:45:44.185870906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:45:44.445776 containerd[2460]: time="2026-01-22T00:45:44.445637577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:44.454668 containerd[2460]: time="2026-01-22T00:45:44.454625073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:45:44.454996 containerd[2460]: time="2026-01-22T00:45:44.454708025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:44.455061 kubelet[3924]: E0122 00:45:44.454843 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:45:44.455061 
kubelet[3924]: E0122 00:45:44.454884 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:45:44.455061 kubelet[3924]: E0122 00:45:44.455017 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:27c137ff87524ee5bb8d79863905fd63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:44.458030 containerd[2460]: time="2026-01-22T00:45:44.457999305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:45:44.705248 containerd[2460]: time="2026-01-22T00:45:44.705113205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:44.707927 containerd[2460]: time="2026-01-22T00:45:44.707875630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:45:44.708042 containerd[2460]: time="2026-01-22T00:45:44.707880558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:44.708145 kubelet[3924]: E0122 00:45:44.708111 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:45:44.708215 kubelet[3924]: E0122 00:45:44.708158 3924 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:45:44.708346 kubelet[3924]: E0122 00:45:44.708294 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:44.709527 kubelet[3924]: E0122 00:45:44.709438 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:45:50.186661 containerd[2460]: time="2026-01-22T00:45:50.186142501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:45:50.447214 containerd[2460]: 
time="2026-01-22T00:45:50.447086218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:50.449816 containerd[2460]: time="2026-01-22T00:45:50.449782719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:45:50.449897 containerd[2460]: time="2026-01-22T00:45:50.449853786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:50.450009 kubelet[3924]: E0122 00:45:50.449970 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:50.450288 kubelet[3924]: E0122 00:45:50.450261 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:50.450727 kubelet[3924]: E0122 00:45:50.450416 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8jng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-68764f557f-l8tpm_calico-apiserver(dd066ec6-7b8c-4975-b067-940020b582cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:50.451638 kubelet[3924]: E0122 00:45:50.451609 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:45:52.185762 containerd[2460]: time="2026-01-22T00:45:52.185612646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:45:52.438387 containerd[2460]: time="2026-01-22T00:45:52.438247962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:52.442495 containerd[2460]: time="2026-01-22T00:45:52.442426380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:45:52.442495 containerd[2460]: time="2026-01-22T00:45:52.442475001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:52.442630 kubelet[3924]: E0122 00:45:52.442593 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:45:52.442968 kubelet[3924]: E0122 00:45:52.442640 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:45:52.442968 kubelet[3924]: E0122 00:45:52.442778 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:52.445153 containerd[2460]: time="2026-01-22T00:45:52.445124360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:45:52.699640 containerd[2460]: time="2026-01-22T00:45:52.699516130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:52.702755 containerd[2460]: time="2026-01-22T00:45:52.702644391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:45:52.702971 containerd[2460]: time="2026-01-22T00:45:52.702854843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:52.703431 kubelet[3924]: E0122 00:45:52.703384 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:45:52.703596 kubelet[3924]: E0122 00:45:52.703524 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:45:52.703838 kubelet[3924]: E0122 00:45:52.703712 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:52.705195 kubelet[3924]: E0122 00:45:52.705146 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:45:53.186495 containerd[2460]: time="2026-01-22T00:45:53.186251232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:45:53.439881 containerd[2460]: time="2026-01-22T00:45:53.439769823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
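The audit PROCTITLE fields scattered through the records above are process command lines, hex-encoded with NUL bytes separating the arguments: the long 72756E63... values are runc invocations and the 69707461... values are iptables-restore invocations. A minimal decoding sketch (Python, standard library only; the function name is illustrative and the sample value is the iptables-restore proctitle from the NETFILTER_CFG records above):

    def decode_proctitle(hex_value: str) -> list:
        """Turn an audit PROCTITLE hex string back into an argv list."""
        raw = bytes.fromhex(hex_value)          # hex text -> raw bytes
        return [arg.decode("utf-8", "replace")  # NUL separates the arguments
                for arg in raw.split(b"\x00") if arg]

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The runc proctitles decode the same way; the kernel truncates the recorded value, which is why the container-ID path at the end of those records appears cut short.

The dominant pattern in this part of the log is containerd reporting "fetch failed after status: 404 Not Found" from ghcr.io for each ghcr.io/flatcar/calico/* image, followed by kubelet ErrImagePull and then ImagePullBackOff for the affected pods. A small sketch for tallying which image references keep failing, written against the exact containerd message format shown above (the script and its invocation are an editorial illustration, not part of the log):

    import collections
    import re
    import sys

    # Matches containerd error lines such as:
    #   level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" ...
    PULL_FAILED = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

    counts = collections.Counter(
        match.group(1)
        for line in sys.stdin
        for match in PULL_FAILED.finditer(line)
    )
    for image, n in counts.most_common():
        print(f"{n:3d}  {image}")

Fed with this section of the journal (for example, journalctl -u containerd piped into the script, assuming the entries are available through the journal), it would show each calico image tag failing repeatedly, consistent with the per-pod back-off messages that follow.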
Jan 22 00:45:53.442478 containerd[2460]: time="2026-01-22T00:45:53.442438952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:45:53.442541 containerd[2460]: time="2026-01-22T00:45:53.442511134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:53.442655 kubelet[3924]: E0122 00:45:53.442617 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:45:53.442934 kubelet[3924]: E0122 00:45:53.442765 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:45:53.443070 kubelet[3924]: E0122 00:45:53.443030 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h22j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5555c47f4f-szgpn_calico-system(1c96102f-5e79-4a6e-9dde-550f505c5961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:53.443546 containerd[2460]: time="2026-01-22T00:45:53.443507723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:45:53.444993 kubelet[3924]: E0122 00:45:53.444957 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:45:53.708877 containerd[2460]: time="2026-01-22T00:45:53.708712784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:53.711758 containerd[2460]: time="2026-01-22T00:45:53.711645118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:45:53.711758 containerd[2460]: time="2026-01-22T00:45:53.711670845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:53.711924 kubelet[3924]: E0122 00:45:53.711882 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:45:53.711967 kubelet[3924]: E0122 00:45:53.711936 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:45:53.712105 kubelet[3924]: E0122 00:45:53.712067 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98q7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pmkws_calico-system(60deceea-d34a-4dce-b9f3-936e34e45689): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:53.713589 kubelet[3924]: E0122 00:45:53.713503 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:45:54.186390 containerd[2460]: time="2026-01-22T00:45:54.185763745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
22 00:45:54.433476 containerd[2460]: time="2026-01-22T00:45:54.433432721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:45:54.452319 containerd[2460]: time="2026-01-22T00:45:54.451914593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:45:54.452319 containerd[2460]: time="2026-01-22T00:45:54.451945343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:45:54.452445 kubelet[3924]: E0122 00:45:54.452086 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:54.452445 kubelet[3924]: E0122 00:45:54.452130 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:45:54.452445 kubelet[3924]: E0122 00:45:54.452258 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsqnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-68764f557f-r6nmq_calico-apiserver(77488da9-2016-4a7d-b29a-dee9cf79fa65): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:45:54.453509 kubelet[3924]: E0122 00:45:54.453443 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:46:00.186176 kubelet[3924]: E0122 00:46:00.185704 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:46:05.185288 kubelet[3924]: E0122 00:46:05.185188 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:46:06.186486 kubelet[3924]: E0122 00:46:06.186372 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:46:08.188406 kubelet[3924]: E0122 00:46:08.186357 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:46:08.188406 kubelet[3924]: E0122 00:46:08.186357 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:46:09.187982 kubelet[3924]: E0122 00:46:09.187916 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:46:15.186997 containerd[2460]: time="2026-01-22T00:46:15.186875338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:46:15.482527 containerd[2460]: time="2026-01-22T00:46:15.482269008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:15.485920 containerd[2460]: time="2026-01-22T00:46:15.485851528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:46:15.486134 containerd[2460]: time="2026-01-22T00:46:15.485885818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:15.486756 kubelet[3924]: E0122 00:46:15.486263 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:46:15.486756 kubelet[3924]: E0122 00:46:15.486311 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:46:15.486756 kubelet[3924]: E0122 00:46:15.486425 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:27c137ff87524ee5bb8d79863905fd63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:15.490192 containerd[2460]: time="2026-01-22T00:46:15.490162612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:46:15.736538 containerd[2460]: time="2026-01-22T00:46:15.736387666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:15.740970 containerd[2460]: time="2026-01-22T00:46:15.740162306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:46:15.741164 containerd[2460]: time="2026-01-22T00:46:15.740194094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:15.741363 kubelet[3924]: E0122 00:46:15.741325 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:46:15.741414 kubelet[3924]: E0122 00:46:15.741377 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:46:15.741529 kubelet[3924]: E0122 00:46:15.741497 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:15.743863 kubelet[3924]: E0122 00:46:15.743824 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:46:18.186525 containerd[2460]: time="2026-01-22T00:46:18.186456461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:46:18.431431 containerd[2460]: time="2026-01-22T00:46:18.431384817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:18.439120 containerd[2460]: time="2026-01-22T00:46:18.438904528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
22 00:46:18.439120 containerd[2460]: time="2026-01-22T00:46:18.438978010Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:18.439721 kubelet[3924]: E0122 00:46:18.439092 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:46:18.439721 kubelet[3924]: E0122 00:46:18.439288 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:46:18.439721 kubelet[3924]: E0122 00:46:18.439429 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8jng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68764f557f-l8tpm_calico-apiserver(dd066ec6-7b8c-4975-b067-940020b582cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:18.441697 kubelet[3924]: E0122 00:46:18.441602 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:46:20.186069 containerd[2460]: time="2026-01-22T00:46:20.186026826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:46:20.439987 containerd[2460]: time="2026-01-22T00:46:20.439862392Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:20.443348 containerd[2460]: time="2026-01-22T00:46:20.443306102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:46:20.443455 containerd[2460]: time="2026-01-22T00:46:20.443364534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:20.443486 kubelet[3924]: E0122 00:46:20.443439 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:46:20.443486 kubelet[3924]: E0122 00:46:20.443476 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:46:20.443818 kubelet[3924]: E0122 00:46:20.443597 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:20.445942 containerd[2460]: time="2026-01-22T00:46:20.445905140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:46:20.684409 containerd[2460]: time="2026-01-22T00:46:20.684246431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:20.687476 containerd[2460]: time="2026-01-22T00:46:20.687351646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:46:20.687476 containerd[2460]: time="2026-01-22T00:46:20.687451936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:20.687783 kubelet[3924]: E0122 00:46:20.687747 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:46:20.687844 kubelet[3924]: E0122 00:46:20.687795 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:46:20.687954 kubelet[3924]: E0122 00:46:20.687915 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:20.689409 kubelet[3924]: E0122 00:46:20.689359 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:46:21.187305 containerd[2460]: time="2026-01-22T00:46:21.187254124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:46:21.472029 containerd[2460]: time="2026-01-22T00:46:21.471544053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 
00:46:21.475412 containerd[2460]: time="2026-01-22T00:46:21.475290797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:46:21.475412 containerd[2460]: time="2026-01-22T00:46:21.475380567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:21.475702 kubelet[3924]: E0122 00:46:21.475660 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:46:21.476000 kubelet[3924]: E0122 00:46:21.475701 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:46:21.476000 kubelet[3924]: E0122 00:46:21.475931 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsqnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68764f557f-r6nmq_calico-apiserver(77488da9-2016-4a7d-b29a-dee9cf79fa65): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:21.476942 containerd[2460]: time="2026-01-22T00:46:21.476551717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:46:21.477650 kubelet[3924]: E0122 00:46:21.477619 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:46:21.722641 containerd[2460]: time="2026-01-22T00:46:21.722516284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:21.726247 containerd[2460]: time="2026-01-22T00:46:21.726199469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:46:21.726352 containerd[2460]: time="2026-01-22T00:46:21.726284188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:21.726442 kubelet[3924]: E0122 00:46:21.726404 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:46:21.726496 kubelet[3924]: E0122 00:46:21.726456 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:46:21.726630 kubelet[3924]: E0122 00:46:21.726588 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h22j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5555c47f4f-szgpn_calico-system(1c96102f-5e79-4a6e-9dde-550f505c5961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:21.728004 kubelet[3924]: E0122 00:46:21.727923 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:46:22.187173 containerd[2460]: time="2026-01-22T00:46:22.187130037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:46:22.432252 containerd[2460]: 
time="2026-01-22T00:46:22.432059573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:22.435545 containerd[2460]: time="2026-01-22T00:46:22.435431619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:46:22.435545 containerd[2460]: time="2026-01-22T00:46:22.435467932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:22.435947 kubelet[3924]: E0122 00:46:22.435873 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:46:22.435947 kubelet[3924]: E0122 00:46:22.435929 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:46:22.436204 kubelet[3924]: E0122 00:46:22.436163 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98q7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pmkws_calico-system(60deceea-d34a-4dce-b9f3-936e34e45689): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:22.437676 kubelet[3924]: E0122 00:46:22.437469 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:46:27.188311 kubelet[3924]: E0122 00:46:27.188131 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:46:30.187390 kubelet[3924]: E0122 00:46:30.187327 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:46:32.187461 kubelet[3924]: E0122 00:46:32.187031 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:46:33.186981 kubelet[3924]: E0122 00:46:33.186727 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:46:34.186183 kubelet[3924]: E0122 00:46:34.186102 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:46:35.195771 kubelet[3924]: E0122 00:46:35.194283 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:46:35.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.28:22-10.200.16.10:51460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:35.834258 systemd[1]: Started sshd@7-10.200.8.28:22-10.200.16.10:51460.service - OpenSSH per-connection server daemon (10.200.16.10:51460). Jan 22 00:46:35.841797 kernel: audit: type=1130 audit(1769042795.832:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.28:22-10.200.16.10:51460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:46:36.414863 sshd[5976]: Accepted publickey for core from 10.200.16.10 port 51460 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:46:36.426135 kernel: audit: type=1101 audit(1769042796.413:757): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.413000 audit[5976]: USER_ACCT pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.427112 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:46:36.424000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.441761 kernel: audit: type=1103 audit(1769042796.424:758): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.443630 systemd-logind[2425]: New session 10 of user core. Jan 22 00:46:36.449750 kernel: audit: type=1006 audit(1769042796.424:759): pid=5976 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 22 00:46:36.450930 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 22 00:46:36.424000 audit[5976]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff176454d0 a2=3 a3=0 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:36.459793 kernel: audit: type=1300 audit(1769042796.424:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff176454d0 a2=3 a3=0 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:36.424000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:36.468801 kernel: audit: type=1327 audit(1769042796.424:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:36.462000 audit[5976]: USER_START pid=5976 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.478763 kernel: audit: type=1105 audit(1769042796.462:760): pid=5976 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.465000 audit[5979]: CRED_ACQ pid=5979 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.489789 kernel: audit: type=1103 audit(1769042796.465:761): pid=5979 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.817854 sshd[5979]: Connection closed by 10.200.16.10 port 51460 Jan 22 00:46:36.819758 sshd-session[5976]: pam_unix(sshd:session): session closed for user core Jan 22 00:46:36.819000 audit[5976]: USER_END pid=5976 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.825068 systemd-logind[2425]: Session 10 logged out. Waiting for processes to exit. Jan 22 00:46:36.825585 systemd[1]: sshd@7-10.200.8.28:22-10.200.16.10:51460.service: Deactivated successfully. Jan 22 00:46:36.831434 kernel: audit: type=1106 audit(1769042796.819:762): pid=5976 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.830564 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 22 00:46:36.819000 audit[5976]: CRED_DISP pid=5976 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:36.835283 systemd-logind[2425]: Removed session 10. Jan 22 00:46:36.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.28:22-10.200.16.10:51460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:36.842749 kernel: audit: type=1104 audit(1769042796.819:763): pid=5976 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:41.189767 kubelet[3924]: E0122 00:46:41.188994 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:46:41.943479 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:46:41.943591 kernel: audit: type=1130 audit(1769042801.941:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.28:22-10.200.16.10:60532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:41.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.28:22-10.200.16.10:60532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:41.942187 systemd[1]: Started sshd@8-10.200.8.28:22-10.200.16.10:60532.service - OpenSSH per-connection server daemon (10.200.16.10:60532). 
Jan 22 00:46:42.532000 audit[5992]: USER_ACCT pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.533222 sshd[5992]: Accepted publickey for core from 10.200.16.10 port 60532 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:46:42.537200 sshd-session[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:46:42.539261 kernel: audit: type=1101 audit(1769042802.532:766): pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.539330 kernel: audit: type=1103 audit(1769042802.535:767): pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.535000 audit[5992]: CRED_ACQ pid=5992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.545410 systemd-logind[2425]: New session 11 of user core. Jan 22 00:46:42.548218 kernel: audit: type=1006 audit(1769042802.536:768): pid=5992 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 22 00:46:42.550809 kernel: audit: type=1300 audit(1769042802.536:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4431a820 a2=3 a3=0 items=0 ppid=1 pid=5992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:42.536000 audit[5992]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4431a820 a2=3 a3=0 items=0 ppid=1 pid=5992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:42.554896 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 22 00:46:42.536000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:42.557000 audit[5992]: USER_START pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.564620 kernel: audit: type=1327 audit(1769042802.536:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:42.564677 kernel: audit: type=1105 audit(1769042802.557:769): pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.566000 audit[5995]: CRED_ACQ pid=5995 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.570020 kernel: audit: type=1103 audit(1769042802.566:770): pid=5995 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.907760 sshd[5995]: Connection closed by 10.200.16.10 port 60532 Jan 22 00:46:42.908256 sshd-session[5992]: pam_unix(sshd:session): session closed for user core Jan 22 00:46:42.913132 systemd-logind[2425]: Session 11 logged out. Waiting for processes to exit. Jan 22 00:46:42.910000 audit[5992]: USER_END pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.918754 kernel: audit: type=1106 audit(1769042802.910:771): pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.919488 systemd[1]: sshd@8-10.200.8.28:22-10.200.16.10:60532.service: Deactivated successfully. Jan 22 00:46:42.923996 systemd[1]: session-11.scope: Deactivated successfully. Jan 22 00:46:42.925767 systemd-logind[2425]: Removed session 11. 
Jan 22 00:46:42.910000 audit[5992]: CRED_DISP pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.931761 kernel: audit: type=1104 audit(1769042802.910:772): pid=5992 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:42.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.28:22-10.200.16.10:60532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:43.189886 kubelet[3924]: E0122 00:46:43.188318 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:46:46.185960 kubelet[3924]: E0122 00:46:46.185911 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:46:47.186506 kubelet[3924]: E0122 00:46:47.186426 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:46:48.037037 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:46:48.037168 kernel: audit: type=1130 audit(1769042808.029:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.28:22-10.200.16.10:60546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:48.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.28:22-10.200.16.10:60546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:48.030054 systemd[1]: Started sshd@9-10.200.8.28:22-10.200.16.10:60546.service - OpenSSH per-connection server daemon (10.200.16.10:60546). 
Jan 22 00:46:48.186168 kubelet[3924]: E0122 00:46:48.185911 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:46:48.187537 kubelet[3924]: E0122 00:46:48.187470 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:46:48.626000 audit[6008]: USER_ACCT pid=6008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:48.631043 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:46:48.632255 sshd[6008]: Accepted publickey for core from 10.200.16.10 port 60546 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:46:48.629000 audit[6008]: CRED_ACQ pid=6008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:48.635191 kernel: audit: type=1101 audit(1769042808.626:775): pid=6008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:48.635258 kernel: audit: type=1103 audit(1769042808.629:776): pid=6008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:48.629000 audit[6008]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb07ba8f0 a2=3 a3=0 items=0 ppid=1 pid=6008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:48.642721 kernel: audit: type=1006 audit(1769042808.629:777): pid=6008 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 22 00:46:48.642815 kernel: audit: type=1300 audit(1769042808.629:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb07ba8f0 a2=3 a3=0 items=0 ppid=1 pid=6008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:48.644134 systemd-logind[2425]: New session 12 of user core. Jan 22 00:46:48.629000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:48.647131 kernel: audit: type=1327 audit(1769042808.629:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:48.650958 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 22 00:46:48.652000 audit[6008]: USER_START pid=6008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:48.660973 kernel: audit: type=1105 audit(1769042808.652:778): pid=6008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:48.661044 kernel: audit: type=1103 audit(1769042808.659:779): pid=6011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:48.659000 audit[6011]: CRED_ACQ pid=6011 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.002821 sshd[6011]: Connection closed by 10.200.16.10 port 60546 Jan 22 00:46:49.004288 sshd-session[6008]: pam_unix(sshd:session): session closed for user core Jan 22 00:46:49.004000 audit[6008]: USER_END pid=6008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.011395 systemd[1]: sshd@9-10.200.8.28:22-10.200.16.10:60546.service: Deactivated successfully. Jan 22 00:46:49.004000 audit[6008]: CRED_DISP pid=6008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.015202 systemd[1]: session-12.scope: Deactivated successfully. Jan 22 00:46:49.017426 systemd-logind[2425]: Session 12 logged out. Waiting for processes to exit. Jan 22 00:46:49.018633 systemd-logind[2425]: Removed session 12. 
Jan 22 00:46:49.019035 kernel: audit: type=1106 audit(1769042809.004:780): pid=6008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.019101 kernel: audit: type=1104 audit(1769042809.004:781): pid=6008 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.28:22-10.200.16.10:60546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:49.122042 systemd[1]: Started sshd@10-10.200.8.28:22-10.200.16.10:60548.service - OpenSSH per-connection server daemon (10.200.16.10:60548). Jan 22 00:46:49.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.28:22-10.200.16.10:60548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:49.705000 audit[6024]: USER_ACCT pid=6024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.707215 sshd[6024]: Accepted publickey for core from 10.200.16.10 port 60548 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:46:49.708422 sshd-session[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:46:49.707000 audit[6024]: CRED_ACQ pid=6024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.707000 audit[6024]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc8425c00 a2=3 a3=0 items=0 ppid=1 pid=6024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:49.707000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:49.714572 systemd-logind[2425]: New session 13 of user core. Jan 22 00:46:49.720929 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 22 00:46:49.723000 audit[6024]: USER_START pid=6024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:49.725000 audit[6027]: CRED_ACQ pid=6027 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:50.027516 waagent[2665]: 2026-01-22T00:46:50.027384Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 22 00:46:50.037531 waagent[2665]: 2026-01-22T00:46:50.037283Z INFO ExtHandler Jan 22 00:46:50.037637 waagent[2665]: 2026-01-22T00:46:50.037513Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 22 00:46:50.095368 waagent[2665]: 2026-01-22T00:46:50.095336Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 22 00:46:50.104503 sshd[6027]: Connection closed by 10.200.16.10 port 60548 Jan 22 00:46:50.103406 sshd-session[6024]: pam_unix(sshd:session): session closed for user core Jan 22 00:46:50.104000 audit[6024]: USER_END pid=6024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:50.104000 audit[6024]: CRED_DISP pid=6024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:50.108293 systemd-logind[2425]: Session 13 logged out. Waiting for processes to exit. Jan 22 00:46:50.109182 systemd[1]: sshd@10-10.200.8.28:22-10.200.16.10:60548.service: Deactivated successfully. Jan 22 00:46:50.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.28:22-10.200.16.10:60548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:50.112585 systemd[1]: session-13.scope: Deactivated successfully. Jan 22 00:46:50.114905 systemd-logind[2425]: Removed session 13. Jan 22 00:46:50.228779 waagent[2665]: 2026-01-22T00:46:50.227684Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AB1CF2332987C44E3D7091599DD7EBDDF2FAC0B5', 'hasPrivateKey': True} Jan 22 00:46:50.228779 waagent[2665]: 2026-01-22T00:46:50.228173Z INFO ExtHandler Fetch goal state completed Jan 22 00:46:50.228779 waagent[2665]: 2026-01-22T00:46:50.228450Z INFO ExtHandler ExtHandler Jan 22 00:46:50.228779 waagent[2665]: 2026-01-22T00:46:50.228484Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 3dc04b7e-9e48-4499-9222-1224d212b4bf correlation f1441f51-08d8-48f1-b3ee-114308b56dba created: 2026-01-22T00:46:45.985970Z] Jan 22 00:46:50.228779 waagent[2665]: 2026-01-22T00:46:50.228687Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 22 00:46:50.229412 waagent[2665]: 2026-01-22T00:46:50.229386Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 22 00:46:50.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.28:22-10.200.16.10:51380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:50.261433 systemd[1]: Started sshd@11-10.200.8.28:22-10.200.16.10:51380.service - OpenSSH per-connection server daemon (10.200.16.10:51380). Jan 22 00:46:50.841000 audit[6041]: USER_ACCT pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:50.842227 sshd[6041]: Accepted publickey for core from 10.200.16.10 port 51380 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:46:50.842000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:50.842000 audit[6041]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffede344030 a2=3 a3=0 items=0 ppid=1 pid=6041 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:50.842000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:50.843314 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:46:50.847824 systemd-logind[2425]: New session 14 of user core. Jan 22 00:46:50.852107 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 22 00:46:50.855000 audit[6041]: USER_START pid=6041 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:50.856000 audit[6044]: CRED_ACQ pid=6044 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:51.283001 sshd[6044]: Connection closed by 10.200.16.10 port 51380 Jan 22 00:46:51.284359 sshd-session[6041]: pam_unix(sshd:session): session closed for user core Jan 22 00:46:51.285000 audit[6041]: USER_END pid=6041 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:51.286000 audit[6041]: CRED_DISP pid=6041 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:51.289149 systemd-logind[2425]: Session 14 logged out. Waiting for processes to exit. Jan 22 00:46:51.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.28:22-10.200.16.10:51380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:51.292246 systemd[1]: sshd@11-10.200.8.28:22-10.200.16.10:51380.service: Deactivated successfully. Jan 22 00:46:51.295557 systemd[1]: session-14.scope: Deactivated successfully. Jan 22 00:46:51.300403 systemd-logind[2425]: Removed session 14. Jan 22 00:46:56.186450 containerd[2460]: time="2026-01-22T00:46:56.186406465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 22 00:46:56.279786 waagent[2665]: 2026-01-22T00:46:56.279511Z INFO ExtHandler Jan 22 00:46:56.279786 waagent[2665]: 2026-01-22T00:46:56.279627Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 88d5ad44-9be1-4b16-9828-b5a16262755b eTag: 13612156458088704114 source: Fabric] Jan 22 00:46:56.280896 waagent[2665]: 2026-01-22T00:46:56.280833Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 22 00:46:56.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.28:22-10.200.16.10:51390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:46:56.408421 systemd[1]: Started sshd@12-10.200.8.28:22-10.200.16.10:51390.service - OpenSSH per-connection server daemon (10.200.16.10:51390). Jan 22 00:46:56.409834 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 22 00:46:56.410092 kernel: audit: type=1130 audit(1769042816.406:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.28:22-10.200.16.10:51390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:46:56.429944 containerd[2460]: time="2026-01-22T00:46:56.429879734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:56.432941 containerd[2460]: time="2026-01-22T00:46:56.432902507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 22 00:46:56.433028 containerd[2460]: time="2026-01-22T00:46:56.432927343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:56.433134 kubelet[3924]: E0122 00:46:56.433099 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:46:56.433465 kubelet[3924]: E0122 00:46:56.433390 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 22 00:46:56.433759 kubelet[3924]: E0122 00:46:56.433710 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:27c137ff87524ee5bb8d79863905fd63,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:56.437629 containerd[2460]: time="2026-01-22T00:46:56.437555798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 22 00:46:56.686833 containerd[2460]: time="2026-01-22T00:46:56.686784765Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:46:56.690060 containerd[2460]: time="2026-01-22T00:46:56.689858189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 22 00:46:56.690060 containerd[2460]: time="2026-01-22T00:46:56.689933234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 22 00:46:56.690249 kubelet[3924]: E0122 00:46:56.690042 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:46:56.690249 kubelet[3924]: E0122 00:46:56.690087 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 22 00:46:56.690317 kubelet[3924]: E0122 00:46:56.690203 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dckdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-759c5b6477-kxt5n_calico-system(c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 22 00:46:56.691694 kubelet[3924]: E0122 00:46:56.691591 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:46:56.984000 audit[6068]: USER_ACCT pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:56.989426 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:46:56.990121 sshd[6068]: Accepted publickey for core from 10.200.16.10 port 51390 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:46:56.990779 kernel: audit: type=1101 audit(1769042816.984:802): pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:56.987000 audit[6068]: CRED_ACQ pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:56.997967 systemd-logind[2425]: New session 15 of user core. 
Jan 22 00:46:56.999759 kernel: audit: type=1103 audit(1769042816.987:803): pid=6068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:56.999994 kernel: audit: type=1006 audit(1769042816.987:804): pid=6068 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 22 00:46:57.006793 kernel: audit: type=1300 audit(1769042816.987:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe41a2a7f0 a2=3 a3=0 items=0 ppid=1 pid=6068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:56.987000 audit[6068]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe41a2a7f0 a2=3 a3=0 items=0 ppid=1 pid=6068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:46:57.009367 kernel: audit: type=1327 audit(1769042816.987:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:56.987000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:46:57.009637 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 22 00:46:57.011000 audit[6068]: USER_START pid=6068 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.018776 kernel: audit: type=1105 audit(1769042817.011:805): pid=6068 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.017000 audit[6071]: CRED_ACQ pid=6071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.024780 kernel: audit: type=1103 audit(1769042817.017:806): pid=6071 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.186036 kubelet[3924]: E0122 00:46:57.185620 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:46:57.186036 kubelet[3924]: E0122 00:46:57.185996 3924 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:46:57.384261 sshd[6071]: Connection closed by 10.200.16.10 port 51390 Jan 22 00:46:57.385922 sshd-session[6068]: pam_unix(sshd:session): session closed for user core Jan 22 00:46:57.396769 kernel: audit: type=1106 audit(1769042817.385:807): pid=6068 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.385000 audit[6068]: USER_END pid=6068 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.391111 systemd-logind[2425]: Session 15 logged out. Waiting for processes to exit. Jan 22 00:46:57.392005 systemd[1]: sshd@12-10.200.8.28:22-10.200.16.10:51390.service: Deactivated successfully. Jan 22 00:46:57.394418 systemd[1]: session-15.scope: Deactivated successfully. Jan 22 00:46:57.397641 systemd-logind[2425]: Removed session 15. Jan 22 00:46:57.385000 audit[6068]: CRED_DISP pid=6068 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.407810 kernel: audit: type=1104 audit(1769042817.385:808): pid=6068 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:46:57.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.28:22-10.200.16.10:51390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:46:59.188762 kubelet[3924]: E0122 00:46:59.188346 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:47:00.185542 kubelet[3924]: E0122 00:47:00.185498 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:47:02.186382 containerd[2460]: time="2026-01-22T00:47:02.185781216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:47:02.430758 containerd[2460]: time="2026-01-22T00:47:02.430692765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:47:02.434271 containerd[2460]: time="2026-01-22T00:47:02.434138849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:47:02.434413 containerd[2460]: time="2026-01-22T00:47:02.434395790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:47:02.434548 kubelet[3924]: E0122 00:47:02.434505 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:47:02.434878 kubelet[3924]: E0122 00:47:02.434563 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:47:02.434878 kubelet[3924]: E0122 00:47:02.434703 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tsqnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68764f557f-r6nmq_calico-apiserver(77488da9-2016-4a7d-b29a-dee9cf79fa65): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 00:47:02.436219 kubelet[3924]: E0122 00:47:02.436181 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:47:02.512909 systemd[1]: Started sshd@13-10.200.8.28:22-10.200.16.10:48214.service - OpenSSH per-connection server daemon (10.200.16.10:48214). Jan 22 00:47:02.523453 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:47:02.523527 kernel: audit: type=1130 audit(1769042822.512:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.28:22-10.200.16.10:48214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:02.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.28:22-10.200.16.10:48214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:47:03.111000 audit[6116]: USER_ACCT pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.115308 sshd-session[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:03.119197 sshd[6116]: Accepted publickey for core from 10.200.16.10 port 48214 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:03.111000 audit[6116]: CRED_ACQ pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.124473 systemd-logind[2425]: New session 16 of user core. Jan 22 00:47:03.128052 kernel: audit: type=1101 audit(1769042823.111:811): pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.128117 kernel: audit: type=1103 audit(1769042823.111:812): pid=6116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.141282 kernel: audit: type=1006 audit(1769042823.111:813): pid=6116 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 22 00:47:03.111000 audit[6116]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8c3e3ee0 a2=3 a3=0 items=0 ppid=1 pid=6116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:03.147432 kernel: audit: type=1300 audit(1769042823.111:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd8c3e3ee0 a2=3 a3=0 items=0 ppid=1 pid=6116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:03.147944 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 22 00:47:03.111000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:03.152593 kernel: audit: type=1327 audit(1769042823.111:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:03.154000 audit[6116]: USER_START pid=6116 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.165904 kernel: audit: type=1105 audit(1769042823.154:814): pid=6116 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.164000 audit[6131]: CRED_ACQ pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.174756 kernel: audit: type=1103 audit(1769042823.164:815): pid=6131 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.493637 sshd[6131]: Connection closed by 10.200.16.10 port 48214 Jan 22 00:47:03.493873 sshd-session[6116]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:03.495000 audit[6116]: USER_END pid=6116 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.502765 kernel: audit: type=1106 audit(1769042823.495:816): pid=6116 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.502689 systemd[1]: sshd@13-10.200.8.28:22-10.200.16.10:48214.service: Deactivated successfully. Jan 22 00:47:03.504731 systemd[1]: session-16.scope: Deactivated successfully. Jan 22 00:47:03.496000 audit[6116]: CRED_DISP pid=6116 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.28:22-10.200.16.10:48214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:47:03.510767 kernel: audit: type=1104 audit(1769042823.496:817): pid=6116 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:03.512010 systemd-logind[2425]: Session 16 logged out. Waiting for processes to exit. Jan 22 00:47:03.515105 systemd-logind[2425]: Removed session 16. Jan 22 00:47:08.618009 systemd[1]: Started sshd@14-10.200.8.28:22-10.200.16.10:48216.service - OpenSSH per-connection server daemon (10.200.16.10:48216). Jan 22 00:47:08.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.28:22-10.200.16.10:48216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:08.619856 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:47:08.619908 kernel: audit: type=1130 audit(1769042828.617:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.28:22-10.200.16.10:48216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:09.186083 containerd[2460]: time="2026-01-22T00:47:09.186014390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 22 00:47:09.206000 audit[6150]: USER_ACCT pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.208782 sshd-session[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:09.210142 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 48216 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:09.207000 audit[6150]: CRED_ACQ pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.216660 systemd-logind[2425]: New session 17 of user core. 
Jan 22 00:47:09.219200 kernel: audit: type=1101 audit(1769042829.206:820): pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.219260 kernel: audit: type=1103 audit(1769042829.207:821): pid=6150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.224041 kernel: audit: type=1006 audit(1769042829.207:822): pid=6150 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 22 00:47:09.207000 audit[6150]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf48d8a20 a2=3 a3=0 items=0 ppid=1 pid=6150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:09.225008 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 22 00:47:09.231404 kernel: audit: type=1300 audit(1769042829.207:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf48d8a20 a2=3 a3=0 items=0 ppid=1 pid=6150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:09.207000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:09.235036 kernel: audit: type=1327 audit(1769042829.207:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:09.229000 audit[6150]: USER_START pid=6150 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.239963 kernel: audit: type=1105 audit(1769042829.229:823): pid=6150 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.231000 audit[6153]: CRED_ACQ pid=6153 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.245514 kernel: audit: type=1103 audit(1769042829.231:824): pid=6153 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.432013 containerd[2460]: time="2026-01-22T00:47:09.431942886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:47:09.435652 containerd[2460]: time="2026-01-22T00:47:09.435606610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 22 00:47:09.435949 containerd[2460]: time="2026-01-22T00:47:09.435712227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 22 00:47:09.436021 kubelet[3924]: E0122 00:47:09.435874 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:47:09.436021 kubelet[3924]: E0122 00:47:09.435922 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 22 00:47:09.437178 kubelet[3924]: E0122 00:47:09.436060 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8jng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68764f557f-l8tpm_calico-apiserver(dd066ec6-7b8c-4975-b067-940020b582cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 22 
00:47:09.437565 kubelet[3924]: E0122 00:47:09.437524 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:47:09.590993 sshd[6153]: Connection closed by 10.200.16.10 port 48216 Jan 22 00:47:09.591707 sshd-session[6150]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:09.592000 audit[6150]: USER_END pid=6150 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.596242 systemd-logind[2425]: Session 17 logged out. Waiting for processes to exit. Jan 22 00:47:09.597237 systemd[1]: sshd@14-10.200.8.28:22-10.200.16.10:48216.service: Deactivated successfully. Jan 22 00:47:09.599561 systemd[1]: session-17.scope: Deactivated successfully. Jan 22 00:47:09.602452 kernel: audit: type=1106 audit(1769042829.592:825): pid=6150 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.602507 kernel: audit: type=1104 audit(1769042829.592:826): pid=6150 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.592000 audit[6150]: CRED_DISP pid=6150 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:09.601814 systemd-logind[2425]: Removed session 17. Jan 22 00:47:09.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.28:22-10.200.16.10:48216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:09.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.28:22-10.200.16.10:44516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:09.713873 systemd[1]: Started sshd@15-10.200.8.28:22-10.200.16.10:44516.service - OpenSSH per-connection server daemon (10.200.16.10:44516). 
Jan 22 00:47:10.288000 audit[6165]: USER_ACCT pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:10.289411 sshd[6165]: Accepted publickey for core from 10.200.16.10 port 44516 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:10.290000 audit[6165]: CRED_ACQ pid=6165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:10.290000 audit[6165]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb2f97dd0 a2=3 a3=0 items=0 ppid=1 pid=6165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:10.290000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:10.291777 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:10.298931 systemd-logind[2425]: New session 18 of user core. Jan 22 00:47:10.303930 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 22 00:47:10.307000 audit[6165]: USER_START pid=6165 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:10.310000 audit[6168]: CRED_ACQ pid=6168 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:10.734790 sshd[6168]: Connection closed by 10.200.16.10 port 44516 Jan 22 00:47:10.735127 sshd-session[6165]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:10.735000 audit[6165]: USER_END pid=6165 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:10.736000 audit[6165]: CRED_DISP pid=6165 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:10.739285 systemd[1]: sshd@15-10.200.8.28:22-10.200.16.10:44516.service: Deactivated successfully. Jan 22 00:47:10.739770 systemd-logind[2425]: Session 18 logged out. Waiting for processes to exit. Jan 22 00:47:10.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.28:22-10.200.16.10:44516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:10.741571 systemd[1]: session-18.scope: Deactivated successfully. Jan 22 00:47:10.743205 systemd-logind[2425]: Removed session 18. 
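The PROCTITLE values in the audit records are hex-encoded command lines with NUL-separated arguments. Decoding the two values that recur throughout this log shows which commands they belong to; a minimal sketch in plain Python, with the hex strings copied verbatim from the records above and below:

    # Sketch: decode audit PROCTITLE fields (hex-encoded argv, NUL-separated).
    def decode_proctitle(hex_value: str) -> str:
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode(errors="replace")

    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters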
Jan 22 00:47:10.869700 systemd[1]: Started sshd@16-10.200.8.28:22-10.200.16.10:44524.service - OpenSSH per-connection server daemon (10.200.16.10:44524). Jan 22 00:47:10.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.28:22-10.200.16.10:44524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:11.190328 containerd[2460]: time="2026-01-22T00:47:11.190087659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 22 00:47:11.430676 containerd[2460]: time="2026-01-22T00:47:11.430629012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:47:11.433711 containerd[2460]: time="2026-01-22T00:47:11.433668125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 22 00:47:11.433898 containerd[2460]: time="2026-01-22T00:47:11.433674671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 22 00:47:11.433958 kubelet[3924]: E0122 00:47:11.433882 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:47:11.433958 kubelet[3924]: E0122 00:47:11.433929 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 22 00:47:11.434445 kubelet[3924]: E0122 00:47:11.434128 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98q7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pmkws_calico-system(60deceea-d34a-4dce-b9f3-936e34e45689): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 22 00:47:11.435092 containerd[2460]: time="2026-01-22T00:47:11.434745773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 22 00:47:11.436245 kubelet[3924]: E0122 00:47:11.436192 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" 
podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:47:11.449000 audit[6178]: USER_ACCT pid=6178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:11.450632 sshd[6178]: Accepted publickey for core from 10.200.16.10 port 44524 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:11.450000 audit[6178]: CRED_ACQ pid=6178 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:11.451000 audit[6178]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4940bd30 a2=3 a3=0 items=0 ppid=1 pid=6178 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:11.451000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:11.452398 sshd-session[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:11.458140 systemd-logind[2425]: New session 19 of user core. Jan 22 00:47:11.465066 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 22 00:47:11.467000 audit[6178]: USER_START pid=6178 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:11.470000 audit[6181]: CRED_ACQ pid=6181 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:11.676060 containerd[2460]: time="2026-01-22T00:47:11.676010091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:47:11.678753 containerd[2460]: time="2026-01-22T00:47:11.678700953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 22 00:47:11.678863 containerd[2460]: time="2026-01-22T00:47:11.678783080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 22 00:47:11.679140 kubelet[3924]: E0122 00:47:11.679064 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:47:11.679140 kubelet[3924]: E0122 00:47:11.679108 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 22 00:47:11.679656 kubelet[3924]: E0122 00:47:11.679383 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h22j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5555c47f4f-szgpn_calico-system(1c96102f-5e79-4a6e-9dde-550f505c5961): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 22 00:47:11.680846 kubelet[3924]: E0122 00:47:11.680797 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" 
podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:47:12.188365 kubelet[3924]: E0122 00:47:12.188127 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:47:12.219000 audit[6190]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:47:12.219000 audit[6190]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff22decdf0 a2=0 a3=7fff22decddc items=0 ppid=4073 pid=6190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:12.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:47:12.224000 audit[6190]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=6190 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:47:12.224000 audit[6190]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff22decdf0 a2=0 a3=0 items=0 ppid=4073 pid=6190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:12.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:47:12.269000 audit[6192]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=6192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:47:12.269000 audit[6192]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffba94c940 a2=0 a3=7fffba94c92c items=0 ppid=4073 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:12.269000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:47:12.273000 audit[6192]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=6192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:47:12.273000 audit[6192]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffba94c940 a2=0 a3=0 items=0 ppid=4073 pid=6192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:12.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:47:12.337559 sshd[6181]: Connection closed by 10.200.16.10 port 44524 Jan 22 00:47:12.337486 sshd-session[6178]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:12.339000 audit[6178]: USER_END pid=6178 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:12.340000 audit[6178]: CRED_DISP pid=6178 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:12.343440 systemd[1]: sshd@16-10.200.8.28:22-10.200.16.10:44524.service: Deactivated successfully. Jan 22 00:47:12.344184 systemd-logind[2425]: Session 19 logged out. Waiting for processes to exit. Jan 22 00:47:12.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.28:22-10.200.16.10:44524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:12.347558 systemd[1]: session-19.scope: Deactivated successfully. Jan 22 00:47:12.350903 systemd-logind[2425]: Removed session 19. Jan 22 00:47:12.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.28:22-10.200.16.10:44526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:12.460730 systemd[1]: Started sshd@17-10.200.8.28:22-10.200.16.10:44526.service - OpenSSH per-connection server daemon (10.200.16.10:44526). Jan 22 00:47:13.062000 audit[6197]: USER_ACCT pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.063425 sshd[6197]: Accepted publickey for core from 10.200.16.10 port 44526 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:13.063000 audit[6197]: CRED_ACQ pid=6197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.063000 audit[6197]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7749acc0 a2=3 a3=0 items=0 ppid=1 pid=6197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:13.063000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:13.064514 sshd-session[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:13.069288 systemd-logind[2425]: New session 20 of user core. Jan 22 00:47:13.073941 systemd[1]: Started session-20.scope - Session 20 of User core. 
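The NETFILTER_CFG records above describe iptables-restore runs registering rules and chains in the filter and nat tables (the command line is the one decoded earlier). If the audit stream is saved to a file, a short filter can summarize how many entries each table sees over time; a rough sketch, with the input path as a placeholder:

    # Sketch: total NETFILTER_CFG entry counts per (table, operation) from a
    # saved journal excerpt. The file name below is a placeholder.
    import re
    from collections import Counter

    PATTERN = re.compile(
        r"NETFILTER_CFG table=(?P<table>\w+):\d+ family=\d+ "
        r"entries=(?P<entries>\d+) op=(?P<op>\w+)")

    def summarize(log_text: str) -> Counter:
        totals = Counter()
        for m in PATTERN.finditer(log_text):
            totals[(m["table"], m["op"])] += int(m["entries"])
        return totals

    if __name__ == "__main__":
        with open("journal-excerpt.log", encoding="utf-8", errors="replace") as fh:
            for (table, op), entries in summarize(fh.read()).most_common():
                print(f"{table:8} {op:20} {entries:6} entries")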
Jan 22 00:47:13.075000 audit[6197]: USER_START pid=6197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.076000 audit[6200]: CRED_ACQ pid=6200 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.187954 containerd[2460]: time="2026-01-22T00:47:13.187420545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 22 00:47:13.435716 containerd[2460]: time="2026-01-22T00:47:13.435615758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:47:13.438935 containerd[2460]: time="2026-01-22T00:47:13.438898494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 22 00:47:13.439134 containerd[2460]: time="2026-01-22T00:47:13.439048034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 22 00:47:13.439326 kubelet[3924]: E0122 00:47:13.439284 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:47:13.440976 kubelet[3924]: E0122 00:47:13.439638 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 22 00:47:13.441361 kubelet[3924]: E0122 00:47:13.441313 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 22 00:47:13.444766 containerd[2460]: time="2026-01-22T00:47:13.443850108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 22 00:47:13.627672 sshd[6200]: Connection closed by 10.200.16.10 port 44526 Jan 22 00:47:13.628918 sshd-session[6197]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:13.638783 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 22 00:47:13.638874 kernel: audit: type=1106 audit(1769042833.628:856): pid=6197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.628000 audit[6197]: USER_END pid=6197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.639025 systemd-logind[2425]: Session 20 logged out. Waiting for processes to exit. 
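By this point the same NotFound pattern has hit several Calico images (apiserver, goldmane, kube-controllers, whisker, whisker-backend, csi, and node-driver-registrar below), which points at the whole ghcr.io/flatcar/calico path or the v3.30.4 version being wrong rather than a single bad tag. Given a saved copy of the journal, the distinct failing references can be pulled out of containerd's "PullImage ... failed" errors; a rough sketch, with the file name as a placeholder:

    # Sketch: list distinct image references that containerd failed to pull,
    # based on the PullImage "..." failed error lines in a saved journal excerpt.
    import re

    FAILED = re.compile(r'PullImage \\?"(?P<image>[^"\\]+)\\?" failed')

    def failed_images(path):
        with open(path, encoding="utf-8", errors="replace") as fh:
            return {m["image"] for line in fh for m in FAILED.finditer(line)}

    if __name__ == "__main__":
        for ref in sorted(failed_images("journal-excerpt.log")):  # placeholder path
            print(ref)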
Jan 22 00:47:13.628000 audit[6197]: CRED_DISP pid=6197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.645102 systemd[1]: sshd@17-10.200.8.28:22-10.200.16.10:44526.service: Deactivated successfully. Jan 22 00:47:13.651606 kernel: audit: type=1104 audit(1769042833.628:857): pid=6197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:13.651679 kernel: audit: type=1131 audit(1769042833.643:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.28:22-10.200.16.10:44526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:13.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.28:22-10.200.16.10:44526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:13.650029 systemd[1]: session-20.scope: Deactivated successfully. Jan 22 00:47:13.653180 systemd-logind[2425]: Removed session 20. Jan 22 00:47:13.688410 containerd[2460]: time="2026-01-22T00:47:13.688321615Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 22 00:47:13.692062 containerd[2460]: time="2026-01-22T00:47:13.692024115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 22 00:47:13.692283 containerd[2460]: time="2026-01-22T00:47:13.692154969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 22 00:47:13.692523 kubelet[3924]: E0122 00:47:13.692491 3924 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:47:13.692650 kubelet[3924]: E0122 00:47:13.692629 3924 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 22 00:47:13.692863 kubelet[3924]: E0122 00:47:13.692829 3924 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bdmdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-78h9h_calico-system(222ac10e-a19c-48d7-ba2a-f1cdbf34cf86): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 22 00:47:13.694379 kubelet[3924]: E0122 00:47:13.694332 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:47:13.751417 systemd[1]: Started sshd@18-10.200.8.28:22-10.200.16.10:44538.service - OpenSSH per-connection server daemon (10.200.16.10:44538). Jan 22 00:47:13.757756 kernel: audit: type=1130 audit(1769042833.749:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.28:22-10.200.16.10:44538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:13.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.28:22-10.200.16.10:44538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:47:14.187850 kubelet[3924]: E0122 00:47:14.186725 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:47:14.331000 audit[6210]: USER_ACCT pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.335394 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:14.337179 sshd[6210]: Accepted publickey for core from 10.200.16.10 port 44538 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:14.333000 audit[6210]: CRED_ACQ pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.346582 kernel: audit: type=1101 audit(1769042834.331:860): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.346681 kernel: audit: type=1103 audit(1769042834.333:861): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.351809 kernel: audit: type=1006 audit(1769042834.333:862): pid=6210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 22 00:47:14.359692 kernel: audit: type=1300 audit(1769042834.333:862): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc55517340 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:14.333000 audit[6210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc55517340 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:14.353601 systemd-logind[2425]: New session 21 of user core. Jan 22 00:47:14.333000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:14.362755 kernel: audit: type=1327 audit(1769042834.333:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:14.363016 systemd[1]: Started session-21.scope - Session 21 of User core. 
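The kubelet pod_workers entries show the affected pods cycling between ErrImagePull and ImagePullBackOff. With cluster access the same set of pods can be listed programmatically; a minimal sketch using the Kubernetes Python client, which is assumed to be installed (kubectl get pods -A reports the same reasons in its STATUS column):

    # Sketch: list pods whose containers are waiting on image-pull errors.
    # Assumes the `kubernetes` client library and a usable kubeconfig.
    from kubernetes import client, config

    def pull_failures():
        config.load_kube_config()      # or config.load_incluster_config()
        v1 = client.CoreV1Api()
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            for cs in (pod.status.container_statuses or []):
                waiting = cs.state.waiting if cs.state else None
                if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                    yield pod.metadata.namespace, pod.metadata.name, cs.image, waiting.reason

    if __name__ == "__main__":
        for ns, name, image, reason in pull_failures():
            print(f"{ns}/{name}: {image} ({reason})")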
Jan 22 00:47:14.364000 audit[6210]: USER_START pid=6210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.366000 audit[6213]: CRED_ACQ pid=6213 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.374326 kernel: audit: type=1105 audit(1769042834.364:863): pid=6210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.714282 sshd[6213]: Connection closed by 10.200.16.10 port 44538 Jan 22 00:47:14.714929 sshd-session[6210]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:14.714000 audit[6210]: USER_END pid=6210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.714000 audit[6210]: CRED_DISP pid=6210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:14.718311 systemd[1]: sshd@18-10.200.8.28:22-10.200.16.10:44538.service: Deactivated successfully. Jan 22 00:47:14.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.28:22-10.200.16.10:44538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:14.720359 systemd[1]: session-21.scope: Deactivated successfully. Jan 22 00:47:14.721289 systemd-logind[2425]: Session 21 logged out. Waiting for processes to exit. Jan 22 00:47:14.722997 systemd-logind[2425]: Removed session 21. 
Jan 22 00:47:17.133000 audit[6225]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=6225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:47:17.133000 audit[6225]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd7dd337b0 a2=0 a3=7ffd7dd3379c items=0 ppid=4073 pid=6225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:17.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:47:17.139000 audit[6225]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=6225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 22 00:47:17.139000 audit[6225]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd7dd337b0 a2=0 a3=7ffd7dd3379c items=0 ppid=4073 pid=6225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:17.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 22 00:47:19.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.28:22-10.200.16.10:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:19.837022 systemd[1]: Started sshd@19-10.200.8.28:22-10.200.16.10:60900.service - OpenSSH per-connection server daemon (10.200.16.10:60900). Jan 22 00:47:19.839045 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 22 00:47:19.839284 kernel: audit: type=1130 audit(1769042839.836:870): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.28:22-10.200.16.10:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 22 00:47:20.425000 audit[6227]: USER_ACCT pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.427542 sshd-session[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:20.428227 sshd[6227]: Accepted publickey for core from 10.200.16.10 port 60900 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:20.435911 kernel: audit: type=1101 audit(1769042840.425:871): pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.436008 kernel: audit: type=1103 audit(1769042840.426:872): pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.426000 audit[6227]: CRED_ACQ pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.439503 kernel: audit: type=1006 audit(1769042840.426:873): pid=6227 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 22 00:47:20.426000 audit[6227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0be53730 a2=3 a3=0 items=0 ppid=1 pid=6227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:20.444568 kernel: audit: type=1300 audit(1769042840.426:873): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0be53730 a2=3 a3=0 items=0 ppid=1 pid=6227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:20.444868 systemd-logind[2425]: New session 22 of user core. Jan 22 00:47:20.447019 kernel: audit: type=1327 audit(1769042840.426:873): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:20.426000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:20.449983 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 22 00:47:20.451000 audit[6227]: USER_START pid=6227 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.456000 audit[6230]: CRED_ACQ pid=6230 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.460094 kernel: audit: type=1105 audit(1769042840.451:874): pid=6227 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.460150 kernel: audit: type=1103 audit(1769042840.456:875): pid=6230 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.801061 sshd[6230]: Connection closed by 10.200.16.10 port 60900 Jan 22 00:47:20.801291 sshd-session[6227]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:20.802000 audit[6227]: USER_END pid=6227 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.803000 audit[6227]: CRED_DISP pid=6227 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.809417 systemd[1]: sshd@19-10.200.8.28:22-10.200.16.10:60900.service: Deactivated successfully. Jan 22 00:47:20.812066 kernel: audit: type=1106 audit(1769042840.802:876): pid=6227 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.812125 kernel: audit: type=1104 audit(1769042840.803:877): pid=6227 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:20.813838 systemd[1]: session-22.scope: Deactivated successfully. Jan 22 00:47:20.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.28:22-10.200.16.10:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:20.815651 systemd-logind[2425]: Session 22 logged out. Waiting for processes to exit. Jan 22 00:47:20.816869 systemd-logind[2425]: Removed session 22. 
Jan 22 00:47:21.187828 kubelet[3924]: E0122 00:47:21.187308 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:47:23.187253 kubelet[3924]: E0122 00:47:23.186789 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:47:24.187072 kubelet[3924]: E0122 00:47:24.187024 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:47:25.925802 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:47:25.925921 kernel: audit: type=1130 audit(1769042845.918:879): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.28:22-10.200.16.10:60910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:25.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.28:22-10.200.16.10:60910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:25.919433 systemd[1]: Started sshd@20-10.200.8.28:22-10.200.16.10:60910.service - OpenSSH per-connection server daemon (10.200.16.10:60910). 
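From here on the back-off messages for the same pods repeat at growing intervals. The spacing is consistent with kubelet's image-pull back-off, which by commonly documented defaults starts at 10 seconds, doubles on each failure, and is capped at 5 minutes; the schedule below is illustrative of those defaults and is not read from this node's configuration:

    # Sketch: illustrative image-pull back-off schedule, assuming the commonly
    # documented kubelet defaults (10s initial delay, doubled, capped at 300s).
    def backoff_schedule(initial=10, cap=300, retries=8):
        delay, total = initial, 0
        for attempt in range(1, retries + 1):
            total += delay
            yield attempt, delay, total
            delay = min(delay * 2, cap)

    for attempt, delay, total in backoff_schedule():
        print(f"retry {attempt}: wait {delay:3d}s (cumulative {total:4d}s)")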
Jan 22 00:47:26.186675 kubelet[3924]: E0122 00:47:26.186304 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:47:26.495000 audit[6242]: USER_ACCT pid=6242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.499625 sshd[6242]: Accepted publickey for core from 10.200.16.10 port 60910 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:26.502251 sshd-session[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:26.502766 kernel: audit: type=1101 audit(1769042846.495:880): pid=6242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.501000 audit[6242]: CRED_ACQ pid=6242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.509754 kernel: audit: type=1103 audit(1769042846.501:881): pid=6242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.515765 kernel: audit: type=1006 audit(1769042846.501:882): pid=6242 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 22 00:47:26.501000 audit[6242]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebc417570 a2=3 a3=0 items=0 ppid=1 pid=6242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:26.516666 systemd-logind[2425]: New session 23 of user core. Jan 22 00:47:26.522315 kernel: audit: type=1300 audit(1769042846.501:882): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebc417570 a2=3 a3=0 items=0 ppid=1 pid=6242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:26.501000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:26.524753 kernel: audit: type=1327 audit(1769042846.501:882): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:26.524980 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 22 00:47:26.528000 audit[6242]: USER_START pid=6242 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.534815 kernel: audit: type=1105 audit(1769042846.528:883): pid=6242 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.534000 audit[6245]: CRED_ACQ pid=6245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.540759 kernel: audit: type=1103 audit(1769042846.534:884): pid=6245 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.894590 sshd[6245]: Connection closed by 10.200.16.10 port 60910 Jan 22 00:47:26.895287 sshd-session[6242]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:26.895000 audit[6242]: USER_END pid=6242 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.899107 systemd[1]: sshd@20-10.200.8.28:22-10.200.16.10:60910.service: Deactivated successfully. Jan 22 00:47:26.901868 systemd[1]: session-23.scope: Deactivated successfully. Jan 22 00:47:26.905281 systemd-logind[2425]: Session 23 logged out. Waiting for processes to exit. Jan 22 00:47:26.906547 systemd-logind[2425]: Removed session 23. Jan 22 00:47:26.907045 kernel: audit: type=1106 audit(1769042846.895:885): pid=6242 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.895000 audit[6242]: CRED_DISP pid=6242 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.912762 kernel: audit: type=1104 audit(1769042846.895:886): pid=6242 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:26.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.28:22-10.200.16.10:60910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:47:28.185883 kubelet[3924]: E0122 00:47:28.185831 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:47:29.186813 kubelet[3924]: E0122 00:47:29.186196 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:47:32.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.28:22-10.200.16.10:45072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:32.015942 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:47:32.015975 kernel: audit: type=1130 audit(1769042852.013:888): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.28:22-10.200.16.10:45072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:32.014072 systemd[1]: Started sshd@21-10.200.8.28:22-10.200.16.10:45072.service - OpenSSH per-connection server daemon (10.200.16.10:45072).
Jan 22 00:47:32.186033 kubelet[3924]: E0122 00:47:32.185987 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf" Jan 22 00:47:32.590000 audit[6280]: USER_ACCT pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.601609 kernel: audit: type=1101 audit(1769042852.590:889): pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.601698 sshd[6280]: Accepted publickey for core from 10.200.16.10 port 45072 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:32.603399 sshd-session[6280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:32.615850 kernel: audit: type=1103 audit(1769042852.601:890): pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.601000 audit[6280]: CRED_ACQ pid=6280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.621467 systemd-logind[2425]: New session 24 of user core. Jan 22 00:47:32.624753 kernel: audit: type=1006 audit(1769042852.601:891): pid=6280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 22 00:47:32.601000 audit[6280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6b719660 a2=3 a3=0 items=0 ppid=1 pid=6280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:32.633171 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 22 00:47:32.635962 kernel: audit: type=1300 audit(1769042852.601:891): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6b719660 a2=3 a3=0 items=0 ppid=1 pid=6280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:32.601000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:32.642774 kernel: audit: type=1327 audit(1769042852.601:891): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:32.636000 audit[6280]: USER_START pid=6280 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.653761 kernel: audit: type=1105 audit(1769042852.636:892): pid=6280 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.641000 audit[6283]: CRED_ACQ pid=6283 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.664756 kernel: audit: type=1103 audit(1769042852.641:893): pid=6283 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.992031 sshd[6283]: Connection closed by 10.200.16.10 port 45072 Jan 22 00:47:32.992888 sshd-session[6280]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:32.993000 audit[6280]: USER_END pid=6280 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.999479 systemd[1]: sshd@21-10.200.8.28:22-10.200.16.10:45072.service: Deactivated successfully. Jan 22 00:47:33.002306 systemd[1]: session-24.scope: Deactivated successfully. Jan 22 00:47:33.004881 kernel: audit: type=1106 audit(1769042852.993:894): pid=6280 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:33.004546 systemd-logind[2425]: Session 24 logged out. Waiting for processes to exit. Jan 22 00:47:33.008386 systemd-logind[2425]: Removed session 24. 
Jan 22 00:47:32.993000 audit[6280]: CRED_DISP pid=6280 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:33.017862 kernel: audit: type=1104 audit(1769042852.993:895): pid=6280 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:32.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.28:22-10.200.16.10:45072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:35.186785 kubelet[3924]: E0122 00:47:35.186721 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-759c5b6477-kxt5n" podUID="c7ca39d8-cd5c-4ad6-a84c-aac52a3306f1" Jan 22 00:47:38.123143 systemd[1]: Started sshd@22-10.200.8.28:22-10.200.16.10:45076.service - OpenSSH per-connection server daemon (10.200.16.10:45076). Jan 22 00:47:38.129772 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:47:38.129841 kernel: audit: type=1130 audit(1769042858.122:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.28:22-10.200.16.10:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:38.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.28:22-10.200.16.10:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 22 00:47:38.185717 kubelet[3924]: E0122 00:47:38.185664 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pmkws" podUID="60deceea-d34a-4dce-b9f3-936e34e45689" Jan 22 00:47:38.720702 sshd[6295]: Accepted publickey for core from 10.200.16.10 port 45076 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:38.719000 audit[6295]: USER_ACCT pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:38.730247 sshd-session[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:38.731098 kernel: audit: type=1101 audit(1769042858.719:898): pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:38.736784 systemd-logind[2425]: New session 25 of user core. Jan 22 00:47:38.729000 audit[6295]: CRED_ACQ pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:38.748762 kernel: audit: type=1103 audit(1769042858.729:899): pid=6295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:38.748964 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 22 00:47:38.758770 kernel: audit: type=1006 audit(1769042858.729:900): pid=6295 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 22 00:47:38.729000 audit[6295]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8c341230 a2=3 a3=0 items=0 ppid=1 pid=6295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:38.767757 kernel: audit: type=1300 audit(1769042858.729:900): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8c341230 a2=3 a3=0 items=0 ppid=1 pid=6295 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:38.729000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:38.775064 kernel: audit: type=1327 audit(1769042858.729:900): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:38.775128 kernel: audit: type=1105 audit(1769042858.752:901): pid=6295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:38.752000 audit[6295]: USER_START pid=6295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:38.752000 audit[6298]: CRED_ACQ pid=6298 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:38.792753 kernel: audit: type=1103 audit(1769042858.752:902): pid=6298 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:39.126766 sshd[6298]: Connection closed by 10.200.16.10 port 45076 Jan 22 00:47:39.127351 sshd-session[6295]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:39.129000 audit[6295]: USER_END pid=6295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:39.138890 kernel: audit: type=1106 audit(1769042859.129:903): pid=6295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:39.138000 audit[6295]: CRED_DISP pid=6295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Jan 22 00:47:39.141901 systemd[1]: sshd@22-10.200.8.28:22-10.200.16.10:45076.service: Deactivated successfully. Jan 22 00:47:39.147022 systemd[1]: session-25.scope: Deactivated successfully. Jan 22 00:47:39.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.28:22-10.200.16.10:45076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:39.149786 kernel: audit: type=1104 audit(1769042859.138:904): pid=6295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:39.151652 systemd-logind[2425]: Session 25 logged out. Waiting for processes to exit. Jan 22 00:47:39.154042 systemd-logind[2425]: Removed session 25. Jan 22 00:47:40.185688 kubelet[3924]: E0122 00:47:40.185379 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5555c47f4f-szgpn" podUID="1c96102f-5e79-4a6e-9dde-550f505c5961" Jan 22 00:47:41.185643 kubelet[3924]: E0122 00:47:41.185576 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-r6nmq" podUID="77488da9-2016-4a7d-b29a-dee9cf79fa65" Jan 22 00:47:42.186327 kubelet[3924]: E0122 00:47:42.186255 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-78h9h" podUID="222ac10e-a19c-48d7-ba2a-f1cdbf34cf86" Jan 22 00:47:44.246581 systemd[1]: Started sshd@23-10.200.8.28:22-10.200.16.10:57152.service - OpenSSH per-connection server daemon (10.200.16.10:57152).
Jan 22 00:47:44.256072 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 22 00:47:44.256164 kernel: audit: type=1130 audit(1769042864.246:906): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.28:22-10.200.16.10:57152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:44.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.28:22-10.200.16.10:57152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:44.829000 audit[6310]: USER_ACCT pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:44.831875 sshd[6310]: Accepted publickey for core from 10.200.16.10 port 57152 ssh2: RSA SHA256:hQipMGMdtaSZ7b92HZmOgUPWWHTKhAP4uTxbuEjU9iU Jan 22 00:47:44.834704 sshd-session[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 22 00:47:44.833000 audit[6310]: CRED_ACQ pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:44.838139 kernel: audit: type=1101 audit(1769042864.829:907): pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:44.838204 kernel: audit: type=1103 audit(1769042864.833:908): pid=6310 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:44.842254 kernel: audit: type=1006 audit(1769042864.833:909): pid=6310 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 22 00:47:44.847708 kernel: audit: type=1300 audit(1769042864.833:909): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4ccea330 a2=3 a3=0 items=0 ppid=1 pid=6310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:44.833000 audit[6310]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4ccea330 a2=3 a3=0 items=0 ppid=1 pid=6310 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 22 00:47:44.847941 systemd-logind[2425]: New session 26 of user core. Jan 22 00:47:44.833000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:44.850764 kernel: audit: type=1327 audit(1769042864.833:909): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 22 00:47:44.855944 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 22 00:47:44.857000 audit[6310]: USER_START pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:44.860000 audit[6313]: CRED_ACQ pid=6313 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:44.868183 kernel: audit: type=1105 audit(1769042864.857:910): pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:44.868222 kernel: audit: type=1103 audit(1769042864.860:911): pid=6313 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:45.216695 sshd[6313]: Connection closed by 10.200.16.10 port 57152 Jan 22 00:47:45.219216 sshd-session[6310]: pam_unix(sshd:session): session closed for user core Jan 22 00:47:45.219000 audit[6310]: USER_END pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:45.225110 systemd[1]: sshd@23-10.200.8.28:22-10.200.16.10:57152.service: Deactivated successfully. Jan 22 00:47:45.229774 kernel: audit: type=1106 audit(1769042865.219:912): pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:45.219000 audit[6310]: CRED_DISP pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:45.231010 systemd[1]: session-26.scope: Deactivated successfully. Jan 22 00:47:45.237881 kernel: audit: type=1104 audit(1769042865.219:913): pid=6310 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 22 00:47:45.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.28:22-10.200.16.10:57152 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 22 00:47:45.240451 systemd-logind[2425]: Session 26 logged out. Waiting for processes to exit. Jan 22 00:47:45.242116 systemd-logind[2425]: Removed session 26. 
Jan 22 00:47:46.185150 kubelet[3924]: E0122 00:47:46.185104 3924 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68764f557f-l8tpm" podUID="dd066ec6-7b8c-4975-b067-940020b582cf"