Jan 14 01:19:11.362827 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:26:24 -00 2026 Jan 14 01:19:11.362852 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:19:11.362864 kernel: BIOS-provided physical RAM map: Jan 14 01:19:11.362872 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 14 01:19:11.362879 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 14 01:19:11.362886 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 14 01:19:11.362895 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 14 01:19:11.362903 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 14 01:19:11.362909 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 14 01:19:11.362918 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 14 01:19:11.362925 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 14 01:19:11.362931 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 14 01:19:11.362938 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 14 01:19:11.362945 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 14 01:19:11.362955 kernel: NX (Execute Disable) protection: active Jan 14 01:19:11.362964 kernel: APIC: Static calls initialized Jan 14 01:19:11.362972 kernel: efi: EFI v2.7 by Microsoft Jan 14 01:19:11.362980 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa2018 RNG=0x3ffd2018 Jan 14 01:19:11.362987 kernel: random: crng init done Jan 14 01:19:11.362994 kernel: secureboot: Secure boot disabled Jan 14 01:19:11.363002 kernel: SMBIOS 3.1.0 present. 
Jan 14 01:19:11.363009 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 14 01:19:11.363016 kernel: DMI: Memory slots populated: 2/2 Jan 14 01:19:11.363024 kernel: Hypervisor detected: Microsoft Hyper-V Jan 14 01:19:11.363032 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 14 01:19:11.363041 kernel: Hyper-V: Nested features: 0x3e0101 Jan 14 01:19:11.363049 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 14 01:19:11.363056 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 14 01:19:11.363064 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 01:19:11.363071 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 14 01:19:11.363078 kernel: tsc: Detected 2299.999 MHz processor Jan 14 01:19:11.363085 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 01:19:11.363095 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 01:19:11.363103 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 14 01:19:11.363114 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 14 01:19:11.363122 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 01:19:11.363130 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 14 01:19:11.363138 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 14 01:19:11.363146 kernel: Using GB pages for direct mapping Jan 14 01:19:11.363153 kernel: ACPI: Early table checksum verification disabled Jan 14 01:19:11.363166 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 14 01:19:11.363175 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:19:11.363184 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:19:11.363192 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 14 01:19:11.363201 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 14 01:19:11.363209 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:19:11.363218 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:19:11.363226 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:19:11.363234 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 14 01:19:11.363243 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 14 01:19:11.363252 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 14 01:19:11.363261 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 14 01:19:11.363271 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 14 01:19:11.363279 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 14 01:19:11.363287 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 14 01:19:11.363295 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 01:19:11.363303 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 01:19:11.363311 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Jan 14 01:19:11.363320 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 14 01:19:11.363330 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 01:19:11.363339 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 14 01:19:11.363348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 14 01:19:11.363356 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 14 01:19:11.363364 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 14 01:19:11.363372 kernel: Zone ranges: Jan 14 01:19:11.363380 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 01:19:11.363390 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 01:19:11.363398 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 01:19:11.363407 kernel: Device empty Jan 14 01:19:11.363415 kernel: Movable zone start for each node Jan 14 01:19:11.363424 kernel: Early memory node ranges Jan 14 01:19:11.363432 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 01:19:11.363440 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 14 01:19:11.363449 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 14 01:19:11.363457 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 01:19:11.363466 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 01:19:11.363474 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 01:19:11.363494 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 01:19:11.363504 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 01:19:11.363511 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 14 01:19:11.363521 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 14 01:19:11.363529 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 01:19:11.363537 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 01:19:11.363546 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 14 01:19:11.363554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 01:19:11.363563 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 01:19:11.363572 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 01:19:11.363580 kernel: TSC deadline timer available Jan 14 01:19:11.363589 kernel: CPU topo: Max. logical packages: 1 Jan 14 01:19:11.363597 kernel: CPU topo: Max. logical dies: 1 Jan 14 01:19:11.363605 kernel: CPU topo: Max. dies per package: 1 Jan 14 01:19:11.363613 kernel: CPU topo: Max. threads per core: 2 Jan 14 01:19:11.363622 kernel: CPU topo: Num. cores per package: 1 Jan 14 01:19:11.363630 kernel: CPU topo: Num. 
threads per package: 2 Jan 14 01:19:11.363639 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 14 01:19:11.363649 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 01:19:11.363657 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 01:19:11.363665 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 01:19:11.363674 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 01:19:11.363682 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 14 01:19:11.363690 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 14 01:19:11.363699 kernel: pcpu-alloc: [0] 0 1 Jan 14 01:19:11.363709 kernel: Hyper-V: PV spinlocks enabled Jan 14 01:19:11.363718 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 01:19:11.363727 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:19:11.363736 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 01:19:11.363744 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 01:19:11.363752 kernel: Fallback order for Node 0: 0 Jan 14 01:19:11.363762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 14 01:19:11.363769 kernel: Policy zone: Normal Jan 14 01:19:11.363778 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 01:19:11.363785 kernel: software IO TLB: area num 2. Jan 14 01:19:11.363793 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 01:19:11.363801 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 01:19:11.363809 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 01:19:11.363818 kernel: Dynamic Preempt: voluntary Jan 14 01:19:11.363828 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 01:19:11.363841 kernel: rcu: RCU event tracing is enabled. Jan 14 01:19:11.363855 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 01:19:11.363865 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 01:19:11.363874 kernel: Rude variant of Tasks RCU enabled. Jan 14 01:19:11.363884 kernel: Tracing variant of Tasks RCU enabled. Jan 14 01:19:11.363895 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 01:19:11.363906 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 01:19:11.363917 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:19:11.363930 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:19:11.363940 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 01:19:11.363950 kernel: Using NULL legacy PIC Jan 14 01:19:11.363960 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 01:19:11.363973 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 14 01:19:11.363983 kernel: Console: colour dummy device 80x25 Jan 14 01:19:11.363994 kernel: printk: legacy console [tty1] enabled Jan 14 01:19:11.364005 kernel: printk: legacy console [ttyS0] enabled Jan 14 01:19:11.364015 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 14 01:19:11.364025 kernel: ACPI: Core revision 20240827 Jan 14 01:19:11.364033 kernel: Failed to register legacy timer interrupt Jan 14 01:19:11.364042 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 01:19:11.364051 kernel: x2apic enabled Jan 14 01:19:11.364059 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 01:19:11.364067 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 14 01:19:11.364076 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 01:19:11.364083 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 14 01:19:11.364091 kernel: Hyper-V: Using IPI hypercalls Jan 14 01:19:11.364100 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 01:19:11.364109 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 01:19:11.364118 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 01:19:11.364126 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 01:19:11.364135 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 01:19:11.364143 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 01:19:11.364151 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Jan 14 01:19:11.364160 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Jan 14 01:19:11.364168 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 14 01:19:11.364175 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 14 01:19:11.364183 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 14 01:19:11.364191 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 01:19:11.364199 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 01:19:11.364207 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 01:19:11.364215 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 01:19:11.364225 kernel: RETBleed: Vulnerable Jan 14 01:19:11.364232 kernel: Speculative Store Bypass: Vulnerable Jan 14 01:19:11.364239 kernel: active return thunk: its_return_thunk Jan 14 01:19:11.364247 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 01:19:11.364254 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 01:19:11.364262 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 01:19:11.364270 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 01:19:11.364279 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 01:19:11.364287 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 01:19:11.364294 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 01:19:11.364303 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 14 01:19:11.364311 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 14 01:19:11.364318 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 14 01:19:11.364327 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 01:19:11.364335 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 01:19:11.364343 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 01:19:11.364351 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 01:19:11.364358 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 14 01:19:11.364365 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 14 01:19:11.364373 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 14 01:19:11.364380 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 14 01:19:11.364390 kernel: Freeing SMP alternatives memory: 32K Jan 14 01:19:11.364398 kernel: pid_max: default: 32768 minimum: 301 Jan 14 01:19:11.364406 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 01:19:11.364414 kernel: landlock: Up and running. Jan 14 01:19:11.364421 kernel: SELinux: Initializing. Jan 14 01:19:11.364429 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:19:11.364437 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 01:19:11.364444 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 14 01:19:11.364452 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 14 01:19:11.364461 kernel: signal: max sigframe size: 11952 Jan 14 01:19:11.364471 kernel: rcu: Hierarchical SRCU implementation. Jan 14 01:19:11.364480 kernel: rcu: Max phase no-delay instances is 400. Jan 14 01:19:11.364496 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 01:19:11.364504 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 01:19:11.364512 kernel: smp: Bringing up secondary CPUs ... Jan 14 01:19:11.364521 kernel: smpboot: x86: Booting SMP configuration: Jan 14 01:19:11.364548 kernel: .... 
node #0, CPUs: #1 Jan 14 01:19:11.364557 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 01:19:11.364566 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 14 01:19:11.364575 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 283604K reserved, 0K cma-reserved) Jan 14 01:19:11.364582 kernel: devtmpfs: initialized Jan 14 01:19:11.364590 kernel: x86/mm: Memory block size: 128MB Jan 14 01:19:11.364598 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 01:19:11.364607 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 01:19:11.364617 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 01:19:11.364625 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 01:19:11.364634 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 01:19:11.364642 kernel: audit: initializing netlink subsys (disabled) Jan 14 01:19:11.364650 kernel: audit: type=2000 audit(1768353546.081:1): state=initialized audit_enabled=0 res=1 Jan 14 01:19:11.364658 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 01:19:11.364666 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 01:19:11.364674 kernel: cpuidle: using governor menu Jan 14 01:19:11.364684 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 01:19:11.364693 kernel: dca service started, version 1.12.1 Jan 14 01:19:11.364701 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 14 01:19:11.364709 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 14 01:19:11.364717 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 01:19:11.364724 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 01:19:11.364734 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 01:19:11.364743 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 01:19:11.364751 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 01:19:11.364759 kernel: ACPI: Added _OSI(Module Device) Jan 14 01:19:11.364767 kernel: ACPI: Added _OSI(Processor Device) Jan 14 01:19:11.364775 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 01:19:11.364783 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 01:19:11.364790 kernel: ACPI: Interpreter enabled Jan 14 01:19:11.364800 kernel: ACPI: PM: (supports S0 S5) Jan 14 01:19:11.364809 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 01:19:11.364817 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 01:19:11.364826 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 01:19:11.364834 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 01:19:11.364841 kernel: iommu: Default domain type: Translated Jan 14 01:19:11.364849 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 01:19:11.364859 kernel: efivars: Registered efivars operations Jan 14 01:19:11.364867 kernel: PCI: Using ACPI for IRQ routing Jan 14 01:19:11.364876 kernel: PCI: System does not support PCI Jan 14 01:19:11.364884 kernel: vgaarb: loaded Jan 14 01:19:11.364893 kernel: clocksource: Switched to clocksource tsc-early Jan 14 01:19:11.364900 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 01:19:11.364908 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 01:19:11.364917 kernel: pnp: PnP ACPI init Jan 14 01:19:11.364925 kernel: pnp: PnP ACPI: found 3 devices Jan 14 01:19:11.364934 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 01:19:11.364942 kernel: NET: Registered PF_INET protocol family Jan 14 01:19:11.364951 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 01:19:11.364960 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 01:19:11.364967 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 01:19:11.364977 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 01:19:11.364985 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 01:19:11.364994 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 01:19:11.365002 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:19:11.365010 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 01:19:11.365019 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 01:19:11.365027 kernel: NET: Registered PF_XDP protocol family Jan 14 01:19:11.365036 kernel: PCI: CLS 0 bytes, default 64 Jan 14 01:19:11.365044 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 01:19:11.365052 kernel: software IO TLB: mapped [mem 0x000000003a9ad000-0x000000003e9ad000] (64MB) Jan 14 01:19:11.365060 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 14 01:19:11.365069 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 14 01:19:11.365077 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, 
max_idle_ns: 440795318347 ns Jan 14 01:19:11.365086 kernel: clocksource: Switched to clocksource tsc Jan 14 01:19:11.365095 kernel: Initialise system trusted keyrings Jan 14 01:19:11.365103 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 01:19:11.365111 kernel: Key type asymmetric registered Jan 14 01:19:11.365119 kernel: Asymmetric key parser 'x509' registered Jan 14 01:19:11.365128 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 01:19:11.365137 kernel: io scheduler mq-deadline registered Jan 14 01:19:11.365145 kernel: io scheduler kyber registered Jan 14 01:19:11.365154 kernel: io scheduler bfq registered Jan 14 01:19:11.365162 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 01:19:11.365170 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 01:19:11.365178 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:19:11.365187 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 01:19:11.365195 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:19:11.365204 kernel: i8042: PNP: No PS/2 controller found. Jan 14 01:19:11.365354 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 01:19:11.365473 kernel: rtc_cmos 00:02: setting system clock to 2026-01-14T01:19:07 UTC (1768353547) Jan 14 01:19:11.365605 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 01:19:11.365617 kernel: intel_pstate: Intel P-state driver initializing Jan 14 01:19:11.365627 kernel: efifb: probing for efifb Jan 14 01:19:11.365635 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 01:19:11.365647 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 01:19:11.365657 kernel: efifb: scrolling: redraw Jan 14 01:19:11.365666 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 01:19:11.365675 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 01:19:11.365683 kernel: fb0: EFI VGA frame buffer device Jan 14 01:19:11.365692 kernel: pstore: Using crash dump compression: deflate Jan 14 01:19:11.365700 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 01:19:11.365714 kernel: NET: Registered PF_INET6 protocol family Jan 14 01:19:11.365722 kernel: Segment Routing with IPv6 Jan 14 01:19:11.365731 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 01:19:11.365740 kernel: NET: Registered PF_PACKET protocol family Jan 14 01:19:11.365748 kernel: Key type dns_resolver registered Jan 14 01:19:11.365757 kernel: IPI shorthand broadcast: enabled Jan 14 01:19:11.365765 kernel: sched_clock: Marking stable (1877121373, 89251890)->(2278219394, -311846131) Jan 14 01:19:11.365774 kernel: registered taskstats version 1 Jan 14 01:19:11.365784 kernel: Loading compiled-in X.509 certificates Jan 14 01:19:11.365792 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e43fcdb17feb86efe6ca4b76910b93467fb95f4f' Jan 14 01:19:11.365800 kernel: Demotion targets for Node 0: null Jan 14 01:19:11.365809 kernel: Key type .fscrypt registered Jan 14 01:19:11.365817 kernel: Key type fscrypt-provisioning registered Jan 14 01:19:11.365826 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 14 01:19:11.365834 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:19:11.365846 kernel: ima: No architecture policies found Jan 14 01:19:11.365856 kernel: clk: Disabling unused clocks Jan 14 01:19:11.365865 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:19:11.365874 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:19:11.365883 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 01:19:11.365891 kernel: Run /init as init process Jan 14 01:19:11.365901 kernel: with arguments: Jan 14 01:19:11.365914 kernel: /init Jan 14 01:19:11.365922 kernel: with environment: Jan 14 01:19:11.365931 kernel: HOME=/ Jan 14 01:19:11.365939 kernel: TERM=linux Jan 14 01:19:11.365948 kernel: hv_vmbus: Vmbus version:5.3 Jan 14 01:19:11.365957 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 01:19:11.365966 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 01:19:11.365975 kernel: PTP clock support registered Jan 14 01:19:11.365986 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 01:19:11.365996 kernel: hv_vmbus: registering driver hv_utils Jan 14 01:19:11.366006 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 01:19:11.366014 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 01:19:11.366024 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 01:19:11.366033 kernel: SCSI subsystem initialized Jan 14 01:19:11.366042 kernel: hv_vmbus: registering driver hv_pci Jan 14 01:19:11.366198 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 14 01:19:11.366323 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 14 01:19:11.366467 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 14 01:19:11.366606 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 01:19:11.366748 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 14 01:19:11.366858 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 14 01:19:11.366961 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 01:19:11.367066 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 14 01:19:11.367075 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 01:19:11.367189 kernel: scsi host0: storvsc_host_t Jan 14 01:19:11.367308 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 14 01:19:11.367319 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 01:19:11.367326 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 01:19:11.367334 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 14 01:19:11.368145 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 01:19:11.368166 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 01:19:11.368180 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 14 01:19:11.368287 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 14 01:19:11.368410 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 14 01:19:11.368515 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 14 01:19:11.368527 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 14 01:19:11.368646 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 01:19:11.368657 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Jan 14 01:19:11.368757 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 01:19:11.368768 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 01:19:11.368777 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:19:11.368786 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:19:11.368795 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:19:11.368815 kernel: raid6: avx512x4 gen() 43560 MB/s Jan 14 01:19:11.368825 kernel: raid6: avx512x2 gen() 42742 MB/s Jan 14 01:19:11.368834 kernel: raid6: avx512x1 gen() 26740 MB/s Jan 14 01:19:11.368843 kernel: raid6: avx2x4 gen() 39253 MB/s Jan 14 01:19:11.368852 kernel: raid6: avx2x2 gen() 40527 MB/s Jan 14 01:19:11.368861 kernel: raid6: avx2x1 gen() 30145 MB/s Jan 14 01:19:11.368869 kernel: raid6: using algorithm avx512x4 gen() 43560 MB/s Jan 14 01:19:11.368879 kernel: raid6: .... xor() 7585 MB/s, rmw enabled Jan 14 01:19:11.368888 kernel: raid6: using avx512x2 recovery algorithm Jan 14 01:19:11.368897 kernel: xor: automatically using best checksumming function avx Jan 14 01:19:11.368906 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:19:11.368916 kernel: BTRFS: device fsid cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (910) Jan 14 01:19:11.368925 kernel: BTRFS info (device dm-0): first mount of filesystem cd6116b6-e1b6-44f4-b1e2-5e7c5565b295 Jan 14 01:19:11.368934 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:19:11.368944 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 01:19:11.368953 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:19:11.368962 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:19:11.368970 kernel: loop: module loaded Jan 14 01:19:11.368979 kernel: loop0: detected capacity change from 0 to 100544 Jan 14 01:19:11.368988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:19:11.368998 systemd[1]: Successfully made /usr/ read-only. Jan 14 01:19:11.369012 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:19:11.369022 systemd[1]: Detected virtualization microsoft. Jan 14 01:19:11.369031 systemd[1]: Detected architecture x86-64. Jan 14 01:19:11.369039 systemd[1]: Running in initrd. Jan 14 01:19:11.369049 systemd[1]: No hostname configured, using default hostname. Jan 14 01:19:11.369058 systemd[1]: Hostname set to . Jan 14 01:19:11.369069 systemd[1]: Initializing machine ID from random generator. Jan 14 01:19:11.369079 systemd[1]: Queued start job for default target initrd.target. Jan 14 01:19:11.369089 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:19:11.369098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:19:11.369107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 14 01:19:11.369117 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:19:11.369129 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:19:11.369139 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:19:11.369148 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:19:11.369158 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:19:11.369168 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:19:11.369178 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:19:11.369187 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:19:11.369196 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:19:11.369207 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:19:11.369216 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:19:11.369227 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:19:11.369236 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:19:11.369245 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:19:11.369255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 01:19:11.369264 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 01:19:11.369273 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:19:11.369283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:19:11.369293 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:19:11.369303 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:19:11.369312 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:19:11.369321 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:19:11.369331 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:19:11.369340 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:19:11.369350 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 01:19:11.369361 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:19:11.369370 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:19:11.369379 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:19:11.369389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:19:11.369400 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:19:11.369410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:19:11.369432 systemd-journald[1046]: Collecting audit messages is enabled. Jan 14 01:19:11.369456 systemd[1]: Finished systemd-fsck-usr.service. 
Jan 14 01:19:11.369466 kernel: audit: type=1130 audit(1768353551.359:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.369475 systemd-journald[1046]: Journal started Jan 14 01:19:11.369539 systemd-journald[1046]: Runtime Journal (/run/log/journal/fa508bae5b724672a37018c9934f72a4) is 8M, max 158.5M, 150.5M free. Jan 14 01:19:11.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.373653 kernel: audit: type=1130 audit(1768353551.369:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.373678 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:19:11.381513 kernel: audit: type=1130 audit(1768353551.374:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.378609 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:19:11.387619 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:19:11.489540 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:19:11.493507 kernel: Bridge firewalling registered Jan 14 01:19:11.493724 systemd-modules-load[1050]: Inserted module 'br_netfilter' Jan 14 01:19:11.495693 systemd-tmpfiles[1058]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:19:11.497326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:19:11.497852 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:19:11.507643 kernel: audit: type=1130 audit(1768353551.497:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.507903 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:19:11.511548 kernel: audit: type=1130 audit(1768353551.497:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:11.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.514644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:19:11.520981 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:19:11.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.525500 kernel: audit: type=1130 audit(1768353551.520:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.549313 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:19:11.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.557500 kernel: audit: type=1130 audit(1768353551.548:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.576309 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:19:11.581744 kernel: audit: type=1130 audit(1768353551.577:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.581044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:11.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.587216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:19:11.596139 kernel: audit: type=1130 audit(1768353551.585:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.599000 audit: BPF prog-id=6 op=LOAD Jan 14 01:19:11.602501 kernel: audit: type=1334 audit(1768353551.599:11): prog-id=6 op=LOAD Jan 14 01:19:11.604625 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:19:11.617641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:19:11.632368 kernel: audit: type=1130 audit(1768353551.616:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:11.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.618979 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:19:11.710475 systemd-resolved[1073]: Positive Trust Anchors: Jan 14 01:19:11.711461 systemd-resolved[1073]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:19:11.711467 systemd-resolved[1073]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:19:11.711514 systemd-resolved[1073]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:19:11.752175 dracut-cmdline[1083]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ef461ed71f713584f576c99df12ffb04dd99b33cd2d16edeb307d0cf2f5b4260 Jan 14 01:19:11.772440 systemd-resolved[1073]: Defaulting to hostname 'linux'. Jan 14 01:19:11.775094 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:19:11.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.778462 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:19:11.788738 kernel: audit: type=1130 audit(1768353551.777:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:11.895512 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:19:11.975501 kernel: iscsi: registered transport (tcp) Jan 14 01:19:12.025837 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:19:12.025889 kernel: QLogic iSCSI HBA Driver Jan 14 01:19:12.081581 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:19:12.094647 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:19:12.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.102503 kernel: audit: type=1130 audit(1768353552.096:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:12.102923 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:19:12.133517 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 01:19:12.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.137655 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 01:19:12.142588 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 01:19:12.168585 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:19:12.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.171000 audit: BPF prog-id=7 op=LOAD Jan 14 01:19:12.171000 audit: BPF prog-id=8 op=LOAD Jan 14 01:19:12.174658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:19:12.199342 systemd-udevd[1320]: Using default interface naming scheme 'v257'. Jan 14 01:19:12.208765 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:19:12.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.211168 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 01:19:12.235530 dracut-pre-trigger[1379]: rd.md=0: removing MD RAID activation Jan 14 01:19:12.247449 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:19:12.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.249000 audit: BPF prog-id=9 op=LOAD Jan 14 01:19:12.253167 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:19:12.268260 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:19:12.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.271946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:19:12.299727 systemd-networkd[1446]: lo: Link UP Jan 14 01:19:12.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.299732 systemd-networkd[1446]: lo: Gained carrier Jan 14 01:19:12.300115 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:19:12.303420 systemd[1]: Reached target network.target - Network. Jan 14 01:19:12.324283 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:19:12.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:12.330891 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 01:19:12.410913 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:19:12.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.419542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#274 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 01:19:12.411050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:12.412617 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:19:12.420307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:19:12.429341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:19:12.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.429424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:12.435736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:19:12.444562 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 01:19:12.458496 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6cd812 (unnamed net_device) (uninitialized): VF slot 1 added Jan 14 01:19:12.465507 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 01:19:12.471284 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:12.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:12.501578 systemd-networkd[1446]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:19:12.501584 systemd-networkd[1446]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:19:12.502817 systemd-networkd[1446]: eth0: Link UP Jan 14 01:19:12.503199 systemd-networkd[1446]: eth0: Gained carrier Jan 14 01:19:12.503210 systemd-networkd[1446]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:19:12.516799 kernel: AES CTR mode by8 optimization enabled Jan 14 01:19:12.520553 systemd-networkd[1446]: eth0: DHCPv4 address 10.200.4.7/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 01:19:12.650498 kernel: nvme nvme0: using unchecked data buffer Jan 14 01:19:12.746727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 14 01:19:12.748118 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 01:19:12.869603 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 14 01:19:12.908168 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. 
Jan 14 01:19:12.921324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 14 01:19:13.023175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 01:19:13.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:13.026227 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:19:13.031312 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:19:13.034046 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:19:13.040565 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 01:19:13.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:13.087609 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:19:13.483076 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 14 01:19:13.483345 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 14 01:19:13.485554 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 14 01:19:13.486901 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 01:19:13.492743 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 14 01:19:13.496680 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 14 01:19:13.501867 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 14 01:19:13.501939 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 14 01:19:13.517737 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 01:19:13.517948 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 14 01:19:13.521587 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 14 01:19:13.563006 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 14 01:19:13.573495 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 14 01:19:13.575498 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6cd812 eth0: VF registering: eth1 Jan 14 01:19:13.575658 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 14 01:19:13.580445 systemd-networkd[1446]: eth1: Interface name change detected, renamed to enP30832s1. Jan 14 01:19:13.584609 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jan 14 01:19:13.686504 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 14 01:19:13.690251 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 14 01:19:13.690504 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6cd812 eth0: Data path switched to VF: enP30832s1 Jan 14 01:19:13.690823 systemd-networkd[1446]: enP30832s1: Link UP Jan 14 01:19:13.691828 systemd-networkd[1446]: enP30832s1: Gained carrier Jan 14 01:19:13.795591 systemd-networkd[1446]: eth0: Gained IPv6LL Jan 14 01:19:14.100516 disk-uuid[1606]: Warning: The kernel is still using the old partition table. 
Jan 14 01:19:14.100516 disk-uuid[1606]: The new table will be used at the next reboot or after you Jan 14 01:19:14.100516 disk-uuid[1606]: run partprobe(8) or kpartx(8) Jan 14 01:19:14.100516 disk-uuid[1606]: The operation has completed successfully. Jan 14 01:19:14.111273 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 01:19:14.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:14.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:14.111375 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 01:19:14.114343 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 01:19:14.171504 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1652) Jan 14 01:19:14.174279 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:19:14.174315 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:19:14.198977 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:19:14.199021 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 14 01:19:14.200046 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:19:14.205501 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:19:14.205956 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 01:19:14.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:14.210441 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 01:19:15.472310 ignition[1671]: Ignition 2.24.0 Jan 14 01:19:15.472325 ignition[1671]: Stage: fetch-offline Jan 14 01:19:15.474312 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:19:15.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:15.472441 ignition[1671]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:19:15.480959 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 01:19:15.472450 ignition[1671]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:19:15.472554 ignition[1671]: parsed url from cmdline: "" Jan 14 01:19:15.472557 ignition[1671]: no config URL provided Jan 14 01:19:15.472562 ignition[1671]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:19:15.472569 ignition[1671]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:19:15.472573 ignition[1671]: failed to fetch config: resource requires networking Jan 14 01:19:15.472723 ignition[1671]: Ignition finished successfully Jan 14 01:19:15.511219 ignition[1677]: Ignition 2.24.0 Jan 14 01:19:15.511230 ignition[1677]: Stage: fetch Jan 14 01:19:15.511629 ignition[1677]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:19:15.511636 ignition[1677]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:19:15.511713 ignition[1677]: parsed url from cmdline: "" Jan 14 01:19:15.511717 ignition[1677]: no config URL provided Jan 14 01:19:15.511720 ignition[1677]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 01:19:15.511724 ignition[1677]: no config at "/usr/lib/ignition/user.ign" Jan 14 01:19:15.511741 ignition[1677]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 01:19:15.625173 ignition[1677]: GET result: OK Jan 14 01:19:15.625252 ignition[1677]: config has been read from IMDS userdata Jan 14 01:19:15.625276 ignition[1677]: parsing config with SHA512: 53aa1ebb0d628f9d31a115ee1a5066784912b818621918b808effacb52a3094b2f3354211868edb679a726f03c9650b5eae50ba817861a5ee538305fc79ee13a Jan 14 01:19:15.630530 unknown[1677]: fetched base config from "system" Jan 14 01:19:15.630536 unknown[1677]: fetched base config from "system" Jan 14 01:19:15.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:15.630978 ignition[1677]: fetch: fetch complete Jan 14 01:19:15.630542 unknown[1677]: fetched user config from "azure" Jan 14 01:19:15.630983 ignition[1677]: fetch: fetch passed Jan 14 01:19:15.633239 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 01:19:15.631022 ignition[1677]: Ignition finished successfully Jan 14 01:19:15.635912 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 01:19:15.659111 ignition[1683]: Ignition 2.24.0 Jan 14 01:19:15.659122 ignition[1683]: Stage: kargs Jan 14 01:19:15.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:15.661388 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 01:19:15.659337 ignition[1683]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:19:15.663681 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 01:19:15.659344 ignition[1683]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:19:15.660114 ignition[1683]: kargs: kargs passed Jan 14 01:19:15.660148 ignition[1683]: Ignition finished successfully Jan 14 01:19:15.694688 ignition[1689]: Ignition 2.24.0 Jan 14 01:19:15.694698 ignition[1689]: Stage: disks Jan 14 01:19:15.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:15.696811 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 01:19:15.694907 ignition[1689]: no configs at "/usr/lib/ignition/base.d" Jan 14 01:19:15.700751 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 01:19:15.694914 ignition[1689]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:19:15.703533 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 01:19:15.695672 ignition[1689]: disks: disks passed Jan 14 01:19:15.707527 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:19:15.695704 ignition[1689]: Ignition finished successfully Jan 14 01:19:15.710569 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:19:15.714583 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:19:15.717383 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 01:19:15.805549 systemd-fsck[1697]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 14 01:19:15.811408 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 01:19:15.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:15.814744 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 01:19:16.176504 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9c98b0a3-27fc-41c4-a169-349b38bd9ceb r/w with ordered data mode. Quota mode: none. Jan 14 01:19:16.177234 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 01:19:16.180962 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 01:19:16.217496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:19:16.222572 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 01:19:16.225462 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 01:19:16.228989 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 01:19:16.229657 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:19:16.240098 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 01:19:16.245601 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1706) Jan 14 01:19:16.244673 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 01:19:16.250345 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:19:16.250370 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:19:16.259025 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:19:16.259068 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 14 01:19:16.260047 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:19:16.261476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 01:19:17.055474 coreos-metadata[1708]: Jan 14 01:19:17.055 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 01:19:17.059604 coreos-metadata[1708]: Jan 14 01:19:17.058 INFO Fetch successful Jan 14 01:19:17.059604 coreos-metadata[1708]: Jan 14 01:19:17.058 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 01:19:17.357988 coreos-metadata[1708]: Jan 14 01:19:17.357 INFO Fetch successful Jan 14 01:19:17.359211 coreos-metadata[1708]: Jan 14 01:19:17.358 INFO wrote hostname ci-4578.0.0-p-9807086b3c to /sysroot/etc/hostname Jan 14 01:19:17.362373 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 01:19:17.370944 kernel: kauditd_printk_skb: 24 callbacks suppressed Jan 14 01:19:17.370965 kernel: audit: type=1130 audit(1768353557.362:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:17.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:17.923373 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 01:19:17.932964 kernel: audit: type=1130 audit(1768353557.923:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:17.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:17.924771 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 01:19:17.927607 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 01:19:17.974640 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:19:17.974342 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 01:19:17.995341 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 01:19:17.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:18.059516 kernel: audit: type=1130 audit(1768353557.995:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:18.059542 ignition[1809]: INFO : Ignition 2.24.0 Jan 14 01:19:18.059542 ignition[1809]: INFO : Stage: mount Jan 14 01:19:18.059542 ignition[1809]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:19:18.059542 ignition[1809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:19:18.059542 ignition[1809]: INFO : mount: mount passed Jan 14 01:19:18.059542 ignition[1809]: INFO : Ignition finished successfully Jan 14 01:19:18.081083 kernel: audit: type=1130 audit(1768353558.067:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:18.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:18.001311 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 01:19:18.071423 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 01:19:18.085464 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 01:19:18.114501 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1820) Jan 14 01:19:18.116515 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 37f804f9-71c0-44d1-975c-4a397de322e7 Jan 14 01:19:18.116587 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:19:18.128573 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 14 01:19:18.128618 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 14 01:19:18.129757 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 14 01:19:18.131609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 01:19:18.158399 ignition[1837]: INFO : Ignition 2.24.0 Jan 14 01:19:18.158399 ignition[1837]: INFO : Stage: files Jan 14 01:19:18.163538 ignition[1837]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:19:18.163538 ignition[1837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:19:18.163538 ignition[1837]: DEBUG : files: compiled without relabeling support, skipping Jan 14 01:19:18.176443 ignition[1837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 01:19:18.176443 ignition[1837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 01:19:18.244330 ignition[1837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 01:19:18.246183 ignition[1837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 01:19:18.246183 ignition[1837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 01:19:18.246089 unknown[1837]: wrote ssh authorized keys file for user: core Jan 14 01:19:18.278443 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:19:18.280515 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 14 01:19:18.310651 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 01:19:18.381738 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:19:18.385565 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 01:19:18.406520 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:19:18.406520 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 01:19:18.406520 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:19:18.406520 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:19:18.406520 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:19:18.406520 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 14 01:19:18.823061 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 01:19:18.978056 ignition[1837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 14 01:19:18.978056 ignition[1837]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 01:19:19.024004 ignition[1837]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:19:19.031760 ignition[1837]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 01:19:19.031760 ignition[1837]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 01:19:19.031760 ignition[1837]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 01:19:19.031760 ignition[1837]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 01:19:19.044646 kernel: audit: type=1130 audit(1768353559.036:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:19.044728 ignition[1837]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:19:19.044728 ignition[1837]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 01:19:19.044728 ignition[1837]: INFO : files: files passed Jan 14 01:19:19.044728 ignition[1837]: INFO : Ignition finished successfully Jan 14 01:19:19.035980 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 01:19:19.042931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 01:19:19.055604 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 01:19:19.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.062799 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 01:19:19.076555 kernel: audit: type=1130 audit(1768353559.064:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.076588 kernel: audit: type=1131 audit(1768353559.064:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.062886 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 01:19:19.085397 initrd-setup-root-after-ignition[1868]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:19:19.085397 initrd-setup-root-after-ignition[1868]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:19:19.089567 initrd-setup-root-after-ignition[1872]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 01:19:19.096552 kernel: audit: type=1130 audit(1768353559.091:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.087817 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:19:19.092882 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 01:19:19.100917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 01:19:19.133781 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 01:19:19.134866 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 01:19:19.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:19.138250 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 01:19:19.144118 kernel: audit: type=1130 audit(1768353559.136:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.144194 kernel: audit: type=1131 audit(1768353559.136:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.147217 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 01:19:19.149096 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 01:19:19.150905 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 01:19:19.172202 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:19:19.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.175066 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 01:19:19.191328 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:19:19.193297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:19:19.195744 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:19:19.200811 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 01:19:19.204239 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 01:19:19.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.204284 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 01:19:19.211622 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 01:19:19.213875 systemd[1]: Stopped target basic.target - Basic System. Jan 14 01:19:19.217881 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 01:19:19.221229 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 01:19:19.226521 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 01:19:19.226684 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:19:19.231230 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 01:19:19.234841 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 01:19:19.237279 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:19:19.238798 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:19:19.241631 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:19:19.244670 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 14 01:19:19.245850 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:19:19.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.248424 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:19:19.249793 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:19:19.253749 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 01:19:19.254768 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:19:19.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.259963 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 01:19:19.260005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:19:19.266471 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 01:19:19.266526 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:19:19.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.270093 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:19:19.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.270130 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:19:19.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.274504 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 01:19:19.274564 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 01:19:19.278555 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 01:19:19.283595 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:19:19.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.289547 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:19:19.289603 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:19:19.292583 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:19:19.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:19.292627 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:19:19.295266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:19:19.295303 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 01:19:19.312369 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:19:19.312447 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:19:19.319508 ignition[1892]: INFO : Ignition 2.24.0 Jan 14 01:19:19.319508 ignition[1892]: INFO : Stage: umount Jan 14 01:19:19.319508 ignition[1892]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:19:19.319508 ignition[1892]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 01:19:19.319508 ignition[1892]: INFO : umount: umount passed Jan 14 01:19:19.319508 ignition[1892]: INFO : Ignition finished successfully Jan 14 01:19:19.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.321186 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:19:19.321254 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:19:19.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.324989 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:19:19.325057 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:19:19.326841 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:19:19.326884 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:19:19.337194 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 01:19:19.337240 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 01:19:19.337922 systemd[1]: Stopped target network.target - Network. Jan 14 01:19:19.343384 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:19:19.344849 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:19:19.349681 systemd[1]: Stopped target paths.target - Path Units. 
Jan 14 01:19:19.356540 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:19:19.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.360520 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:19:19.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.363554 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 01:19:19.366530 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 01:19:19.369561 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:19:19.369597 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:19:19.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.372548 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 01:19:19.399000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:19:19.372576 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:19:19.374549 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:19:19.374578 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:19:19.376986 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:19:19.377036 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:19:19.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.410000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:19:19.379003 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:19:19.379038 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:19:19.383620 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:19:19.387575 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:19:19.392361 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:19:19.395014 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 01:19:19.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.395116 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:19:19.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.404256 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:19:19.404338 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 14 01:19:19.411940 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:19:19.413238 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:19:19.413275 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:19:19.419545 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:19:19.427746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:19:19.460578 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6cd812 eth0: Data path switched from VF: enP30832s1 Jan 14 01:19:19.460772 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 14 01:19:19.427807 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:19:19.432607 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:19:19.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.432663 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:19:19.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.436120 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:19:19.436166 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:19:19.438947 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:19:19.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.461632 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:19:19.461783 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:19:19.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.466831 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 01:19:19.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.466915 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:19:19.472208 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:19:19.472255 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:19:19.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.475910 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:19:19.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:19.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.475945 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:19:19.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.478528 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:19:19.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.478576 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:19:19.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.485623 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:19:19.485670 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 01:19:19.489651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:19:19.489690 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:19:19.497632 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:19:19.502533 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:19:19.502600 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:19:19.504791 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:19:19.504832 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:19:19.507876 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 01:19:19.507924 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:19:19.511250 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:19:19.511304 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:19:19.515298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:19:19.515343 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:19.518553 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:19:19.518624 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:19:19.724646 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 01:19:19.724779 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:19:19.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:19.728286 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:19:19.729992 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:19:19.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:19.730042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:19:19.736815 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:19:19.752367 systemd[1]: Switching root. Jan 14 01:19:19.832627 systemd-journald[1046]: Journal stopped Jan 14 01:19:24.263628 systemd-journald[1046]: Received SIGTERM from PID 1 (systemd). Jan 14 01:19:24.263668 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:19:24.263689 kernel: SELinux: policy capability open_perms=1 Jan 14 01:19:24.263700 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:19:24.263711 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:19:24.263722 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:19:24.263732 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:19:24.263743 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:19:24.263757 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:19:24.263767 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:19:24.263778 systemd[1]: Successfully loaded SELinux policy in 198.093ms. Jan 14 01:19:24.263790 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.985ms. Jan 14 01:19:24.263801 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:19:24.263813 systemd[1]: Detected virtualization microsoft. Jan 14 01:19:24.263823 systemd[1]: Detected architecture x86-64. Jan 14 01:19:24.263833 systemd[1]: Detected first boot. Jan 14 01:19:24.263843 systemd[1]: Hostname set to <ci-4578.0.0-p-9807086b3c>. Jan 14 01:19:24.263855 systemd[1]: Initializing machine ID from random generator. Jan 14 01:19:24.263865 zram_generator::config[1935]: No configuration found. Jan 14 01:19:24.263875 kernel: Guest personality initialized and is inactive Jan 14 01:19:24.263884 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 14 01:19:24.263892 kernel: Initialized host personality Jan 14 01:19:24.263901 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:19:24.263911 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:19:24.263922 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 14 01:19:24.263931 kernel: audit: type=1334 audit(1768353563.855:95): prog-id=12 op=LOAD Jan 14 01:19:24.263941 kernel: audit: type=1334 audit(1768353563.855:96): prog-id=3 op=UNLOAD Jan 14 01:19:24.263950 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:19:24.263959 kernel: audit: type=1334 audit(1768353563.855:97): prog-id=13 op=LOAD Jan 14 01:19:24.263968 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 14 01:19:24.263978 kernel: audit: type=1334 audit(1768353563.855:98): prog-id=14 op=LOAD Jan 14 01:19:24.263986 kernel: audit: type=1334 audit(1768353563.855:99): prog-id=4 op=UNLOAD Jan 14 01:19:24.263995 kernel: audit: type=1334 audit(1768353563.855:100): prog-id=5 op=UNLOAD Jan 14 01:19:24.264004 kernel: audit: type=1131 audit(1768353563.856:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.264013 kernel: audit: type=1334 audit(1768353563.866:102): prog-id=12 op=UNLOAD Jan 14 01:19:24.264023 kernel: audit: type=1130 audit(1768353563.872:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.264033 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:19:24.264043 kernel: audit: type=1131 audit(1768353563.872:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.264056 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:19:24.264066 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:19:24.264079 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:19:24.264090 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:19:24.264102 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:19:24.264112 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:19:24.264122 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:19:24.264131 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:19:24.264141 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:19:24.264153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:19:24.264167 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:19:24.264178 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 01:19:24.264189 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:19:24.264200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:19:24.264211 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:19:24.264223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:19:24.264243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:19:24.264254 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:19:24.264264 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:19:24.264275 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:19:24.264286 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Jan 14 01:19:24.264296 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:19:24.264307 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:19:24.264320 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:19:24.264332 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:19:24.264343 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:19:24.264354 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:19:24.264365 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:19:24.264378 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:19:24.264388 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:19:24.264398 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 01:19:24.264411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:19:24.264421 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:19:24.264433 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:19:24.264444 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:19:24.264455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:19:24.264467 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:19:24.264478 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:19:24.264582 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:19:24.264593 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:19:24.264607 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:24.264618 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:19:24.264629 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:19:24.264640 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:19:24.264651 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:19:24.264662 systemd[1]: Reached target machines.target - Containers. Jan 14 01:19:24.264672 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:19:24.264685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:19:24.264695 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:19:24.264704 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:19:24.264715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:19:24.264726 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:19:24.264737 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:19:24.264749 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:19:24.264760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 14 01:19:24.264772 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:19:24.264784 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:19:24.264796 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:19:24.264806 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:19:24.264816 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:19:24.264829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:19:24.264840 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:19:24.264851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:19:24.264862 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:19:24.264874 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:19:24.264886 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 01:19:24.264900 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:19:24.264911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:24.264921 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:19:24.264932 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:19:24.264943 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:19:24.264954 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:19:24.264966 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:19:24.264979 kernel: fuse: init (API version 7.41) Jan 14 01:19:24.264990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:19:24.265000 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:19:24.265037 systemd-journald[2018]: Collecting audit messages is enabled. Jan 14 01:19:24.265064 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:19:24.265076 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:19:24.265087 systemd-journald[2018]: Journal started Jan 14 01:19:24.265119 systemd-journald[2018]: Runtime Journal (/run/log/journal/8a8fb8c1183b47eda4cb3f3850734017) is 8M, max 158.5M, 150.5M free. Jan 14 01:19:23.979000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 01:19:24.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:19:24.162000 audit: BPF prog-id=14 op=UNLOAD Jan 14 01:19:24.162000 audit: BPF prog-id=13 op=UNLOAD Jan 14 01:19:24.162000 audit: BPF prog-id=15 op=LOAD Jan 14 01:19:24.162000 audit: BPF prog-id=16 op=LOAD Jan 14 01:19:24.162000 audit: BPF prog-id=17 op=LOAD Jan 14 01:19:24.257000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 01:19:24.257000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcba33a8c0 a2=4000 a3=0 items=0 ppid=1 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:24.257000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 01:19:24.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:23.849280 systemd[1]: Queued start job for default target multi-user.target. Jan 14 01:19:23.856924 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 14 01:19:23.857257 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 01:19:24.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.271562 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:19:24.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.273286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:19:24.273439 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:19:24.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.275991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:19:24.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.276589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
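The SYSCALL audit record above is in auditd's flat key=value format: arch=c000003e is AUDIT_ARCH_X86_64, and on the x86_64 syscall table syscall=46 is sendmsg, which is consistent with systemd-journald writing to its audit netlink socket (exit=60 bytes sent). As a minimal sketch of decoding such a record, not part of the boot log itself, and with only a hand-picked subset of the syscall table:

```python
import re

# Hand-picked subset of the x86_64 syscall table, just enough to decode the
# records seen in this log (46 = sendmsg). Not a complete table.
X86_64_SYSCALLS = {44: "sendto", 45: "recvfrom", 46: "sendmsg", 47: "recvmsg"}

AUDIT_ARCHES = {0xC000003E: "x86_64"}  # AUDIT_ARCH_X86_64

def parse_audit_record(line: str) -> dict:
    """Split an audit record into its key=value fields (values may be quoted)."""
    fields = {}
    for key, quoted, bare in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', line):
        fields[key] = quoted if quoted else bare
    return fields

record = ('audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 '
          'comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald"')

f = parse_audit_record(record)
arch = AUDIT_ARCHES.get(int(f["arch"], 16), f["arch"])
name = X86_64_SYSCALLS.get(int(f["syscall"]), f["syscall"])
print(f'{f["comm"]}: {name} on {arch} -> success={f["success"]}, exit={f["exit"]}')
# systemd-journal: sendmsg on x86_64 -> success=yes, exit=60
```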
Jan 14 01:19:24.279831 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:19:24.279992 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:19:24.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.284761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:19:24.284905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:19:24.287082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:19:24.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.292087 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:19:24.298462 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:19:24.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.307667 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:19:24.309697 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:19:24.314515 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 01:19:24.320589 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 01:19:24.323591 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:19:24.323627 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:19:24.327443 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:19:24.333695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:19:24.333783 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 01:19:24.341562 kernel: ACPI: bus type drm_connector registered Jan 14 01:19:24.342606 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 01:19:24.347700 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 01:19:24.350599 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:19:24.351514 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 01:19:24.354636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:19:24.356462 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 01:19:24.368658 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 01:19:24.372703 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:19:24.376213 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:19:24.376473 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:19:24.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.380876 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:19:24.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.383679 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 01:19:24.385692 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 01:19:24.391705 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 01:19:24.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.396716 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 01:19:24.404123 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 01:19:24.414623 systemd-journald[2018]: Time spent on flushing to /var/log/journal/8a8fb8c1183b47eda4cb3f3850734017 is 16.990ms for 1132 entries. Jan 14 01:19:24.414623 systemd-journald[2018]: System Journal (/var/log/journal/8a8fb8c1183b47eda4cb3f3850734017) is 8M, max 2.2G, 2.2G free. Jan 14 01:19:24.486914 systemd-journald[2018]: Received client request to flush runtime journal. Jan 14 01:19:24.486955 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 01:19:24.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:19:24.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.422652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:19:24.459808 systemd-tmpfiles[2071]: ACLs are not supported, ignoring. Jan 14 01:19:24.459820 systemd-tmpfiles[2071]: ACLs are not supported, ignoring. Jan 14 01:19:24.460251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:19:24.463290 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:19:24.487627 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 01:19:24.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.532478 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 01:19:24.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.537800 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:19:24.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.543578 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 01:19:24.676573 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 01:19:24.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.677000 audit: BPF prog-id=18 op=LOAD Jan 14 01:19:24.677000 audit: BPF prog-id=19 op=LOAD Jan 14 01:19:24.677000 audit: BPF prog-id=20 op=LOAD Jan 14 01:19:24.680000 audit: BPF prog-id=21 op=LOAD Jan 14 01:19:24.679724 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 01:19:24.684636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:19:24.689615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:19:24.691000 audit: BPF prog-id=22 op=LOAD Jan 14 01:19:24.691000 audit: BPF prog-id=23 op=LOAD Jan 14 01:19:24.691000 audit: BPF prog-id=24 op=LOAD Jan 14 01:19:24.693647 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 01:19:24.695000 audit: BPF prog-id=25 op=LOAD Jan 14 01:19:24.698000 audit: BPF prog-id=26 op=LOAD Jan 14 01:19:24.698000 audit: BPF prog-id=27 op=LOAD Jan 14 01:19:24.703242 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 14 01:19:24.714817 systemd-tmpfiles[2099]: ACLs are not supported, ignoring. Jan 14 01:19:24.714830 systemd-tmpfiles[2099]: ACLs are not supported, ignoring. Jan 14 01:19:24.718656 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:19:24.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.754380 systemd-nsresourced[2100]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 01:19:24.755576 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 01:19:24.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.774315 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 01:19:24.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.860890 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 01:19:24.873230 systemd-oomd[2097]: No swap; memory pressure usage will be degraded Jan 14 01:19:24.873642 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 01:19:24.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:24.877496 kernel: loop2: detected capacity change from 0 to 48592 Jan 14 01:19:24.910787 systemd-resolved[2098]: Positive Trust Anchors: Jan 14 01:19:24.910799 systemd-resolved[2098]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:19:24.910803 systemd-resolved[2098]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:19:24.910836 systemd-resolved[2098]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:19:25.091167 systemd-resolved[2098]: Using system hostname 'ci-4578.0.0-p-9807086b3c'. Jan 14 01:19:25.092225 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:19:25.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.093434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:19:25.113646 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
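The "Positive Trust Anchors" that systemd-resolved prints above are the built-in DNSSEC root DS records (key tags 20326 and 38696). Per RFC 4034 the presentation fields are owner, class, type, key tag, algorithm, digest type, digest; algorithm 8 is RSA/SHA-256 and digest type 2 is SHA-256. A small parsing sketch, separate from the log and using only the values that appear in it:

```python
from dataclasses import dataclass

# Field meanings per RFC 4034 section 5; only values seen in this log are mapped.
DNSSEC_ALGORITHMS = {8: "RSASHA256"}
DIGEST_TYPES = {2: "SHA-256"}

@dataclass
class DSRecord:
    owner: str
    key_tag: int
    algorithm: int
    digest_type: int
    digest: str

def parse_ds(line: str) -> DSRecord:
    """Parse a presentation-format DS record such as the trust anchors above."""
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = line.split()
    return DSRecord(owner, int(key_tag), int(alg), int(digest_type), digest.lower())

anchor = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
ds = parse_ds(anchor)
print(f"{ds.owner} key tag {ds.key_tag}, "
      f"{DNSSEC_ALGORITHMS.get(ds.algorithm)}, {DIGEST_TYPES.get(ds.digest_type)}")
# . key tag 20326, RSASHA256, SHA-256
```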
Jan 14 01:19:25.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.113000 audit: BPF prog-id=8 op=UNLOAD Jan 14 01:19:25.113000 audit: BPF prog-id=7 op=UNLOAD Jan 14 01:19:25.114000 audit: BPF prog-id=28 op=LOAD Jan 14 01:19:25.114000 audit: BPF prog-id=29 op=LOAD Jan 14 01:19:25.116256 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:19:25.143215 systemd-udevd[2121]: Using default interface naming scheme 'v257'. Jan 14 01:19:25.269508 kernel: loop3: detected capacity change from 0 to 219144 Jan 14 01:19:25.313504 kernel: loop4: detected capacity change from 0 to 50784 Jan 14 01:19:25.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.366942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:19:25.370000 audit: BPF prog-id=30 op=LOAD Jan 14 01:19:25.372750 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:19:25.439589 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 01:19:25.476508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#306 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 14 01:19:25.489502 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:19:25.493500 kernel: hv_vmbus: registering driver hyperv_fb Jan 14 01:19:25.497211 kernel: hv_vmbus: registering driver hv_balloon Jan 14 01:19:25.497358 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 14 01:19:25.497424 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 14 01:19:25.500457 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 14 01:19:25.501501 kernel: Console: switching to colour dummy device 80x25 Jan 14 01:19:25.510114 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 01:19:25.512599 systemd-networkd[2133]: lo: Link UP Jan 14 01:19:25.512612 systemd-networkd[2133]: lo: Gained carrier Jan 14 01:19:25.514177 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:19:25.515211 systemd-networkd[2133]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:19:25.515218 systemd-networkd[2133]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:19:25.517599 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 14 01:19:25.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.517715 systemd[1]: Reached target network.target - Network. Jan 14 01:19:25.520585 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 14 01:19:25.523514 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 14 01:19:25.530363 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d6cd812 eth0: Data path switched to VF: enP30832s1 Jan 14 01:19:25.527668 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 01:19:25.530878 systemd-networkd[2133]: enP30832s1: Link UP Jan 14 01:19:25.530962 systemd-networkd[2133]: eth0: Link UP Jan 14 01:19:25.530966 systemd-networkd[2133]: eth0: Gained carrier Jan 14 01:19:25.530980 systemd-networkd[2133]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:19:25.536148 systemd-networkd[2133]: enP30832s1: Gained carrier Jan 14 01:19:25.544574 systemd-networkd[2133]: eth0: DHCPv4 address 10.200.4.7/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 01:19:25.582539 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:19:25.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.640711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:19:25.667989 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:19:25.668197 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:25.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.671595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:19:25.688817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:19:25.689001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:25.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:25.699630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:19:25.747503 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 01:19:25.762621 kernel: loop6: detected capacity change from 0 to 48592 Jan 14 01:19:25.786974 kernel: loop7: detected capacity change from 0 to 219144 Jan 14 01:19:25.799546 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 14 01:19:25.822515 kernel: loop1: detected capacity change from 0 to 50784 Jan 14 01:19:25.831697 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. 
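The DHCPv4 line above (address 10.200.4.7/24, gateway 10.200.4.1, served by 168.63.129.16, the Azure platform wireserver) carries everything needed to derive the on-link subnet. A minimal sketch with the standard library, using the exact values from the log:

```python
import ipaddress
import re

log_line = ("systemd-networkd[2133]: eth0: DHCPv4 address 10.200.4.7/24, "
            "gateway 10.200.4.1 acquired from 168.63.129.16")

m = re.search(r"DHCPv4 address (\S+), gateway (\S+) acquired from (\S+)", log_line)
addr_cidr, gateway, server = m.groups()

iface = ipaddress.ip_interface(addr_cidr)              # 10.200.4.7/24
gw = ipaddress.ip_address(gateway)                     # 10.200.4.1
print("network:      ", iface.network)                 # 10.200.4.0/24
print("broadcast:    ", iface.network.broadcast_address)  # 10.200.4.255
print("usable hosts: ", iface.network.num_addresses - 2)  # 254
print("gateway on-link:", gw in iface.network)         # True
print("dhcp server:  ", server)                        # 168.63.129.16
```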
Jan 14 01:19:25.842009 (sd-merge)[2198]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 14 01:19:25.845608 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:19:25.849211 (sd-merge)[2198]: Merged extensions into '/usr'. Jan 14 01:19:25.852926 systemd[1]: Reload requested from client PID 2069 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 01:19:25.852940 systemd[1]: Reloading... Jan 14 01:19:25.893502 zram_generator::config[2238]: No configuration found. Jan 14 01:19:26.096820 systemd[1]: Reloading finished in 243 ms. Jan 14 01:19:26.127789 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 01:19:26.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.130906 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:19:26.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.133431 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:19:26.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.146255 systemd[1]: Starting ensure-sysext.service... Jan 14 01:19:26.150635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 01:19:26.152000 audit: BPF prog-id=31 op=LOAD Jan 14 01:19:26.152000 audit: BPF prog-id=21 op=UNLOAD Jan 14 01:19:26.152000 audit: BPF prog-id=32 op=LOAD Jan 14 01:19:26.152000 audit: BPF prog-id=18 op=UNLOAD Jan 14 01:19:26.152000 audit: BPF prog-id=33 op=LOAD Jan 14 01:19:26.152000 audit: BPF prog-id=34 op=LOAD Jan 14 01:19:26.152000 audit: BPF prog-id=19 op=UNLOAD Jan 14 01:19:26.152000 audit: BPF prog-id=20 op=UNLOAD Jan 14 01:19:26.153000 audit: BPF prog-id=35 op=LOAD Jan 14 01:19:26.162000 audit: BPF prog-id=22 op=UNLOAD Jan 14 01:19:26.162000 audit: BPF prog-id=36 op=LOAD Jan 14 01:19:26.162000 audit: BPF prog-id=37 op=LOAD Jan 14 01:19:26.162000 audit: BPF prog-id=23 op=UNLOAD Jan 14 01:19:26.162000 audit: BPF prog-id=24 op=UNLOAD Jan 14 01:19:26.162000 audit: BPF prog-id=38 op=LOAD Jan 14 01:19:26.162000 audit: BPF prog-id=39 op=LOAD Jan 14 01:19:26.162000 audit: BPF prog-id=28 op=UNLOAD Jan 14 01:19:26.162000 audit: BPF prog-id=29 op=UNLOAD Jan 14 01:19:26.163000 audit: BPF prog-id=40 op=LOAD Jan 14 01:19:26.163000 audit: BPF prog-id=30 op=UNLOAD Jan 14 01:19:26.163000 audit: BPF prog-id=41 op=LOAD Jan 14 01:19:26.163000 audit: BPF prog-id=25 op=UNLOAD Jan 14 01:19:26.163000 audit: BPF prog-id=42 op=LOAD Jan 14 01:19:26.163000 audit: BPF prog-id=43 op=LOAD Jan 14 01:19:26.163000 audit: BPF prog-id=26 op=UNLOAD Jan 14 01:19:26.163000 audit: BPF prog-id=27 op=UNLOAD Jan 14 01:19:26.165000 audit: BPF prog-id=44 op=LOAD Jan 14 01:19:26.165000 audit: BPF prog-id=15 op=UNLOAD Jan 14 01:19:26.165000 audit: BPF prog-id=45 op=LOAD Jan 14 01:19:26.165000 audit: BPF prog-id=46 op=LOAD Jan 14 01:19:26.165000 audit: BPF prog-id=16 op=UNLOAD Jan 14 01:19:26.165000 audit: BPF prog-id=17 op=UNLOAD Jan 14 01:19:26.172357 systemd[1]: Reload requested from client PID 2303 ('systemctl') (unit ensure-sysext.service)... Jan 14 01:19:26.172436 systemd[1]: Reloading... Jan 14 01:19:26.190759 systemd-tmpfiles[2304]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 01:19:26.190786 systemd-tmpfiles[2304]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 01:19:26.191031 systemd-tmpfiles[2304]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 01:19:26.191768 systemd-tmpfiles[2304]: ACLs are not supported, ignoring. Jan 14 01:19:26.191818 systemd-tmpfiles[2304]: ACLs are not supported, ignoring. Jan 14 01:19:26.196566 systemd-tmpfiles[2304]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:19:26.196574 systemd-tmpfiles[2304]: Skipping /boot Jan 14 01:19:26.206226 systemd-tmpfiles[2304]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 01:19:26.206244 systemd-tmpfiles[2304]: Skipping /boot Jan 14 01:19:26.239512 zram_generator::config[2338]: No configuration found. Jan 14 01:19:26.397056 systemd[1]: Reloading finished in 224 ms. 
Jan 14 01:19:26.407000 audit: BPF prog-id=47 op=LOAD Jan 14 01:19:26.407000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:19:26.407000 audit: BPF prog-id=48 op=LOAD Jan 14 01:19:26.407000 audit: BPF prog-id=49 op=LOAD Jan 14 01:19:26.407000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:19:26.407000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:19:26.408000 audit: BPF prog-id=50 op=LOAD Jan 14 01:19:26.408000 audit: BPF prog-id=32 op=UNLOAD Jan 14 01:19:26.408000 audit: BPF prog-id=51 op=LOAD Jan 14 01:19:26.408000 audit: BPF prog-id=52 op=LOAD Jan 14 01:19:26.408000 audit: BPF prog-id=33 op=UNLOAD Jan 14 01:19:26.408000 audit: BPF prog-id=34 op=UNLOAD Jan 14 01:19:26.409000 audit: BPF prog-id=53 op=LOAD Jan 14 01:19:26.409000 audit: BPF prog-id=35 op=UNLOAD Jan 14 01:19:26.409000 audit: BPF prog-id=54 op=LOAD Jan 14 01:19:26.409000 audit: BPF prog-id=55 op=LOAD Jan 14 01:19:26.409000 audit: BPF prog-id=36 op=UNLOAD Jan 14 01:19:26.409000 audit: BPF prog-id=37 op=UNLOAD Jan 14 01:19:26.410000 audit: BPF prog-id=56 op=LOAD Jan 14 01:19:26.410000 audit: BPF prog-id=40 op=UNLOAD Jan 14 01:19:26.413000 audit: BPF prog-id=57 op=LOAD Jan 14 01:19:26.413000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:19:26.413000 audit: BPF prog-id=58 op=LOAD Jan 14 01:19:26.413000 audit: BPF prog-id=59 op=LOAD Jan 14 01:19:26.413000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:19:26.413000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:19:26.414000 audit: BPF prog-id=60 op=LOAD Jan 14 01:19:26.414000 audit: BPF prog-id=61 op=LOAD Jan 14 01:19:26.414000 audit: BPF prog-id=38 op=UNLOAD Jan 14 01:19:26.414000 audit: BPF prog-id=39 op=UNLOAD Jan 14 01:19:26.415000 audit: BPF prog-id=62 op=LOAD Jan 14 01:19:26.415000 audit: BPF prog-id=31 op=UNLOAD Jan 14 01:19:26.418346 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:19:26.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.427257 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:19:26.430514 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 01:19:26.439692 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 01:19:26.443705 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 01:19:26.447121 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 01:19:26.452108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:26.452254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:19:26.462320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:19:26.466000 audit[2402]: SYSTEM_BOOT pid=2402 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.468292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:19:26.473708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
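The long runs of "audit: BPF prog-id=N op=LOAD/UNLOAD" above come from daemon reloads re-attaching their BPF programs, so each reload produces a burst of unload/load pairs. A rough sketch for tallying that churn from a saved copy of this console log (the file name is hypothetical):

```python
import re
from collections import Counter

# Hypothetical capture of the console output shown above.
LOG_PATH = "boot-console.log"

bpf_event = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

ops = Counter()
live = set()   # prog-ids loaded but not yet unloaded
with open(LOG_PATH) as fh:
    for line in fh:
        m = bpf_event.search(line)
        if not m:
            continue
        prog_id, op = int(m.group(1)), m.group(2)
        ops[op] += 1
        if op == "LOAD":
            live.add(prog_id)
        else:
            live.discard(prog_id)

print(f"{ops['LOAD']} loads, {ops['UNLOAD']} unloads, {len(live)} still loaded")
```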
Jan 14 01:19:26.476687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:19:26.476892 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:19:26.476987 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:19:26.477084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:26.481225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:19:26.483928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:19:26.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.488061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:19:26.488210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:19:26.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.490898 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:19:26.491035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:19:26.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.495477 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 01:19:26.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.501862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:26.502166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:19:26.503054 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 14 01:19:26.506512 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:19:26.511730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:19:26.514651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:19:26.514800 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:19:26.514877 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:19:26.514944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:26.515825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:19:26.515947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:19:26.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.518029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:19:26.518198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:19:26.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.523101 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:19:26.523268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 01:19:26.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.531811 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:26.532023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:19:26.534606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:19:26.538545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:19:26.544038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 14 01:19:26.548559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:19:26.549012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 01:19:26.549146 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 01:19:26.549223 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:19:26.549347 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:19:26.549535 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:19:26.551039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:19:26.551238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:19:26.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.555437 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:19:26.555638 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:19:26.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.558972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:19:26.559131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:19:26.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.565302 systemd[1]: Finished ensure-sysext.service. Jan 14 01:19:26.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.568268 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:19:26.568842 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 14 01:19:26.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.573664 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 01:19:26.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:19:26.577275 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 01:19:26.577346 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 01:19:26.884000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:19:26.884000 audit[2445]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff0acb48b0 a2=420 a3=0 items=0 ppid=2398 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:19:26.884000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:19:26.886643 augenrules[2445]: No rules Jan 14 01:19:26.886959 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:19:26.887163 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:19:27.235609 systemd-networkd[2133]: eth0: Gained IPv6LL Jan 14 01:19:27.237507 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 01:19:27.240858 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 01:19:27.455463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 01:19:27.457094 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 01:19:31.721798 ldconfig[2400]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:19:31.734705 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 01:19:31.737282 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:19:31.751039 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:19:31.753706 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:19:31.755010 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:19:31.756520 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:19:31.759541 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Jan 14 01:19:31.760774 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:19:31.761816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:19:31.763158 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:19:31.764454 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:19:31.767542 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:19:31.770558 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:19:31.770586 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:19:31.771519 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:19:31.774060 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:19:31.777451 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:19:31.781054 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:19:31.783836 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 01:19:31.787545 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:19:31.806878 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:19:31.808343 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:19:31.811054 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:19:31.814217 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:19:31.816534 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:19:31.817444 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:19:31.817479 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:19:31.818913 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 01:19:31.820618 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:19:31.826691 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 01:19:31.830177 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:19:31.834418 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:19:31.839210 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:19:31.846187 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:19:31.849579 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:19:31.852274 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:19:31.853649 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 14 01:19:31.854705 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 14 01:19:31.856364 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jan 14 01:19:31.857666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:31.864611 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:19:31.869236 KVP[2469]: KVP starting; pid is:2469 Jan 14 01:19:31.869684 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 01:19:31.878508 kernel: hv_utils: KVP IC version 4.0 Jan 14 01:19:31.875523 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 01:19:31.877292 KVP[2469]: KVP LIC Version: 3.1 Jan 14 01:19:31.879849 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:19:31.884818 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:19:31.896685 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:19:31.898103 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:19:31.898548 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 01:19:31.901054 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:19:31.909627 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:19:31.913286 extend-filesystems[2464]: Found /dev/nvme0n1p6 Jan 14 01:19:31.915455 google_oslogin_nss_cache[2466]: oslogin_cache_refresh[2466]: Refreshing passwd entry cache Jan 14 01:19:31.917950 oslogin_cache_refresh[2466]: Refreshing passwd entry cache Jan 14 01:19:31.918609 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:19:31.925175 chronyd[2458]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 14 01:19:31.929224 chronyd[2458]: Timezone right/UTC failed leap second check, ignoring Jan 14 01:19:31.929355 chronyd[2458]: Loaded seccomp filter (level 2) Jan 14 01:19:31.932697 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 01:19:31.935926 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:19:31.936340 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:19:31.942963 google_oslogin_nss_cache[2466]: oslogin_cache_refresh[2466]: Failure getting users, quitting Jan 14 01:19:31.942963 google_oslogin_nss_cache[2466]: oslogin_cache_refresh[2466]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:19:31.942963 google_oslogin_nss_cache[2466]: oslogin_cache_refresh[2466]: Refreshing group entry cache Jan 14 01:19:31.943051 extend-filesystems[2464]: Found /dev/nvme0n1p9 Jan 14 01:19:31.941663 oslogin_cache_refresh[2466]: Failure getting users, quitting Jan 14 01:19:31.941677 oslogin_cache_refresh[2466]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:19:31.941713 oslogin_cache_refresh[2466]: Refreshing group entry cache Jan 14 01:19:31.946465 extend-filesystems[2464]: Checking size of /dev/nvme0n1p9 Jan 14 01:19:31.952732 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:19:31.952934 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 01:19:31.959878 jq[2463]: false Jan 14 01:19:31.960902 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 14 01:19:31.961112 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:19:31.961762 google_oslogin_nss_cache[2466]: oslogin_cache_refresh[2466]: Failure getting groups, quitting Jan 14 01:19:31.961760 oslogin_cache_refresh[2466]: Failure getting groups, quitting Jan 14 01:19:31.961841 google_oslogin_nss_cache[2466]: oslogin_cache_refresh[2466]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:19:31.961769 oslogin_cache_refresh[2466]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:19:31.963563 jq[2482]: true Jan 14 01:19:31.966850 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:19:31.967071 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:19:31.983432 jq[2516]: true Jan 14 01:19:31.989204 extend-filesystems[2464]: Resized partition /dev/nvme0n1p9 Jan 14 01:19:32.015347 extend-filesystems[2525]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:19:32.027845 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 14 01:19:32.027940 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 14 01:19:32.026159 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 01:19:32.094424 update_engine[2480]: I20260114 01:19:32.028576 2480 main.cc:92] Flatcar Update Engine starting Jan 14 01:19:32.094655 tar[2487]: linux-amd64/LICENSE Jan 14 01:19:32.095876 tar[2487]: linux-amd64/helm Jan 14 01:19:32.100291 dbus-daemon[2461]: [system] SELinux support is enabled Jan 14 01:19:32.101952 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:19:32.105634 sshd_keygen[2489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 01:19:32.111358 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:19:32.111432 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:19:32.116692 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:19:32.125305 extend-filesystems[2525]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 14 01:19:32.125305 extend-filesystems[2525]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 14 01:19:32.125305 extend-filesystems[2525]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 14 01:19:32.116860 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:19:32.142841 update_engine[2480]: I20260114 01:19:32.139336 2480 update_check_scheduler.cc:74] Next update check in 9m9s Jan 14 01:19:32.142874 extend-filesystems[2464]: Resized filesystem in /dev/nvme0n1p9 Jan 14 01:19:32.119204 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:19:32.119432 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:19:32.135756 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:19:32.138645 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 01:19:32.141134 systemd-logind[2479]: New seat seat0. 
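The EXT4 online resize logged above grows /dev/nvme0n1p9 from 6359552 to 6376955 blocks at the 4k block size resize2fs reports. Worked out as plain arithmetic on those logged values:

```python
BLOCK_SIZE = 4096          # "(4k) blocks" per the resize2fs output above
OLD_BLOCKS = 6_359_552
NEW_BLOCKS = 6_376_955

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

grown = (NEW_BLOCKS - OLD_BLOCKS) * BLOCK_SIZE
print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
print(f"grown by {grown / 2**20:.1f} MiB ({NEW_BLOCKS - OLD_BLOCKS} blocks)")
# before: 24.26 GiB, after: 24.33 GiB
# grown by 68.0 MiB (17403 blocks)
```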
Jan 14 01:19:32.141964 systemd-logind[2479]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 14 01:19:32.142253 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:19:32.157305 bash[2546]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:19:32.180843 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:19:32.188620 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 14 01:19:32.211012 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 01:19:32.215550 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 01:19:32.219777 coreos-metadata[2460]: Jan 14 01:19:32.218 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 01:19:32.220192 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 14 01:19:32.224638 coreos-metadata[2460]: Jan 14 01:19:32.223 INFO Fetch successful Jan 14 01:19:32.224638 coreos-metadata[2460]: Jan 14 01:19:32.223 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 14 01:19:32.229551 coreos-metadata[2460]: Jan 14 01:19:32.229 INFO Fetch successful Jan 14 01:19:32.229605 coreos-metadata[2460]: Jan 14 01:19:32.229 INFO Fetching http://168.63.129.16/machine/0f515863-523d-495a-b1f7-9885fc99f7e1/c91b4b39%2Df415%2D49aa%2D9e12%2Dcaa11c434b29.%5Fci%2D4578.0.0%2Dp%2D9807086b3c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 14 01:19:32.236714 coreos-metadata[2460]: Jan 14 01:19:32.236 INFO Fetch successful Jan 14 01:19:32.236714 coreos-metadata[2460]: Jan 14 01:19:32.236 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 14 01:19:32.256793 coreos-metadata[2460]: Jan 14 01:19:32.256 INFO Fetch successful Jan 14 01:19:32.260566 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 01:19:32.260793 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 01:19:32.266757 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 01:19:32.300532 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 14 01:19:32.341196 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 01:19:32.355381 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 01:19:32.362449 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 01:19:32.365961 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 01:19:32.368533 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 01:19:32.372745 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 01:19:32.440627 locksmithd[2559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:19:32.656524 tar[2487]: linux-amd64/README.md Jan 14 01:19:32.672129 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
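The vmSize fetch above goes to the Azure Instance Metadata Service; a minimal sketch of the same request (URL taken verbatim from the log; the "Metadata: true" header is the standard IMDS requirement, and the call only answers from inside an Azure VM):

    # Minimal sketch of the IMDS vmSize query logged above.
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # prints the VM size string for this instance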
Jan 14 01:19:33.312293 containerd[2507]: time="2026-01-14T01:19:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:19:33.313916 containerd[2507]: time="2026-01-14T01:19:33.313878076Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324446106Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.178µs" Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324476547Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324522521Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324540812Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324657641Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324668879Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324709299Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324719107Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324882385Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324892866Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324901817Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325210 containerd[2507]: time="2026-01-14T01:19:33.324908502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325471 containerd[2507]: time="2026-01-14T01:19:33.325011237Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325471 containerd[2507]: time="2026-01-14T01:19:33.325018840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325471 containerd[2507]: time="2026-01-14T01:19:33.325069454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 
14 01:19:33.325471 containerd[2507]: time="2026-01-14T01:19:33.325194818Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325471 containerd[2507]: time="2026-01-14T01:19:33.325214251Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:19:33.325471 containerd[2507]: time="2026-01-14T01:19:33.325221643Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:19:33.325471 containerd[2507]: time="2026-01-14T01:19:33.325260575Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:19:33.325641 containerd[2507]: time="2026-01-14T01:19:33.325549070Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:19:33.325641 containerd[2507]: time="2026-01-14T01:19:33.325606970Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:19:33.338320 containerd[2507]: time="2026-01-14T01:19:33.338288134Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:19:33.338379 containerd[2507]: time="2026-01-14T01:19:33.338335874Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338534872Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338559119Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338576613Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338589326Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338600462Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338610594Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338621787Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338632730Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338642940Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338652711Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338662539Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338675500Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:19:33.339764 containerd[2507]: time="2026-01-14T01:19:33.338773045Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338788740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338801813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338816865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338828832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338839427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338850168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338861220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338871906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338882608Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338892170Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338914239Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338955125Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.338965930Z" level=info msg="Start snapshots syncer" Jan 14 01:19:33.340094 containerd[2507]: time="2026-01-14T01:19:33.339288907Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:19:33.340361 containerd[2507]: time="2026-01-14T01:19:33.339680270Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:19:33.340361 containerd[2507]: time="2026-01-14T01:19:33.339729491Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:19:33.340494 containerd[2507]: time="2026-01-14T01:19:33.339798459Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:19:33.340494 containerd[2507]: time="2026-01-14T01:19:33.339925426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:19:33.340614 containerd[2507]: time="2026-01-14T01:19:33.340599689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:19:33.340634 containerd[2507]: time="2026-01-14T01:19:33.340618385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:19:33.340634 containerd[2507]: time="2026-01-14T01:19:33.340630197Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:19:33.340672 containerd[2507]: time="2026-01-14T01:19:33.340641910Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:19:33.340672 containerd[2507]: time="2026-01-14T01:19:33.340653390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:19:33.340672 containerd[2507]: time="2026-01-14T01:19:33.340663056Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:19:33.340724 containerd[2507]: time="2026-01-14T01:19:33.340672690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
01:19:33.340724 containerd[2507]: time="2026-01-14T01:19:33.340690802Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:19:33.340760 containerd[2507]: time="2026-01-14T01:19:33.340725405Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:19:33.340760 containerd[2507]: time="2026-01-14T01:19:33.340738076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:19:33.340760 containerd[2507]: time="2026-01-14T01:19:33.340745918Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:19:33.340760 containerd[2507]: time="2026-01-14T01:19:33.340754888Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:19:33.340833 containerd[2507]: time="2026-01-14T01:19:33.340761914Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:19:33.340833 containerd[2507]: time="2026-01-14T01:19:33.340815832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:19:33.340833 containerd[2507]: time="2026-01-14T01:19:33.340826026Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:19:33.340881 containerd[2507]: time="2026-01-14T01:19:33.340836955Z" level=info msg="runtime interface created" Jan 14 01:19:33.340881 containerd[2507]: time="2026-01-14T01:19:33.340842260Z" level=info msg="created NRI interface" Jan 14 01:19:33.340881 containerd[2507]: time="2026-01-14T01:19:33.340849935Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:19:33.340881 containerd[2507]: time="2026-01-14T01:19:33.340861378Z" level=info msg="Connect containerd service" Jan 14 01:19:33.340881 containerd[2507]: time="2026-01-14T01:19:33.340879125Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:19:33.341939 containerd[2507]: time="2026-01-14T01:19:33.341911927Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:19:33.421344 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:19:33.432707 (kubelet)[2631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.966969156Z" level=info msg="Start subscribing containerd event" Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967031870Z" level=info msg="Start recovering state" Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967131308Z" level=info msg="Start event monitor" Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967141892Z" level=info msg="Start cni network conf syncer for default" Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967150141Z" level=info msg="Start streaming server" Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967162043Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967170218Z" level=info msg="runtime interface starting up..." Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967176942Z" level=info msg="starting plugins..." Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967192340Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967446495Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 01:19:33.968185 containerd[2507]: time="2026-01-14T01:19:33.967515524Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 01:19:33.968106 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 01:19:33.970524 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 01:19:33.973503 containerd[2507]: time="2026-01-14T01:19:33.973336631Z" level=info msg="containerd successfully booted in 0.661539s" Jan 14 01:19:33.973968 systemd[1]: Startup finished in 4.649s (kernel) + 10.339s (initrd) + 13.102s (userspace) = 28.091s. Jan 14 01:19:34.005509 kubelet[2631]: E0114 01:19:34.004527 2631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:19:34.008710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:19:34.008840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:19:34.009361 systemd[1]: kubelet.service: Consumed 807ms CPU time, 258.2M memory peak. 
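The kubelet exit above (repeated below with restart counters 1 and 2) is the expected state on a node that has not yet been joined: /var/lib/kubelet/config.yaml is normally written by kubeadm init/join. A small diagnostic sketch along those lines; the path is taken from the error message itself:

    # Diagnostic sketch for the kubelet crash loop above: the unit exits because
    # /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-managed nodes
    # that file appears after "kubeadm init" or "kubeadm join" has run.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.exists():
        print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can load it")
    else:
        print(f"{cfg} missing; kubelet will keep exiting until kubeadm writes it")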
Jan 14 01:19:34.054786 waagent[2596]: 2026-01-14T01:19:34.054719Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 14 01:19:34.055407 waagent[2596]: 2026-01-14T01:19:34.055213Z INFO Daemon Daemon OS: flatcar 4578.0.0 Jan 14 01:19:34.056875 waagent[2596]: 2026-01-14T01:19:34.056812Z INFO Daemon Daemon Python: 3.12.11 Jan 14 01:19:34.057931 waagent[2596]: 2026-01-14T01:19:34.057896Z INFO Daemon Daemon Run daemon Jan 14 01:19:34.060501 waagent[2596]: 2026-01-14T01:19:34.058665Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4578.0.0' Jan 14 01:19:34.060501 waagent[2596]: 2026-01-14T01:19:34.058868Z INFO Daemon Daemon Using waagent for provisioning Jan 14 01:19:34.060501 waagent[2596]: 2026-01-14T01:19:34.059275Z INFO Daemon Daemon Activate resource disk Jan 14 01:19:34.060501 waagent[2596]: 2026-01-14T01:19:34.059773Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 14 01:19:34.064442 waagent[2596]: 2026-01-14T01:19:34.062921Z INFO Daemon Daemon Found device: None Jan 14 01:19:34.064442 waagent[2596]: 2026-01-14T01:19:34.063396Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 14 01:19:34.064442 waagent[2596]: 2026-01-14T01:19:34.063602Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 14 01:19:34.064442 waagent[2596]: 2026-01-14T01:19:34.064460Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 01:19:34.064442 waagent[2596]: 2026-01-14T01:19:34.064811Z INFO Daemon Daemon Running default provisioning handler Jan 14 01:19:34.075546 waagent[2596]: 2026-01-14T01:19:34.075469Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 14 01:19:34.079086 waagent[2596]: 2026-01-14T01:19:34.077470Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 14 01:19:34.079086 waagent[2596]: 2026-01-14T01:19:34.077696Z INFO Daemon Daemon cloud-init is enabled: False Jan 14 01:19:34.079086 waagent[2596]: 2026-01-14T01:19:34.077905Z INFO Daemon Daemon Copying ovf-env.xml Jan 14 01:19:34.163508 waagent[2596]: 2026-01-14T01:19:34.161843Z INFO Daemon Daemon Successfully mounted dvd Jan 14 01:19:34.187047 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 14 01:19:34.189106 waagent[2596]: 2026-01-14T01:19:34.189057Z INFO Daemon Daemon Detect protocol endpoint Jan 14 01:19:34.194514 waagent[2596]: 2026-01-14T01:19:34.189661Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 14 01:19:34.194514 waagent[2596]: 2026-01-14T01:19:34.189873Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 14 01:19:34.194514 waagent[2596]: 2026-01-14T01:19:34.190065Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 14 01:19:34.194514 waagent[2596]: 2026-01-14T01:19:34.190217Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 14 01:19:34.194514 waagent[2596]: 2026-01-14T01:19:34.190361Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 14 01:19:34.230618 waagent[2596]: 2026-01-14T01:19:34.230587Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 14 01:19:34.231950 waagent[2596]: 2026-01-14T01:19:34.231016Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 14 01:19:34.231950 waagent[2596]: 2026-01-14T01:19:34.231194Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 14 01:19:34.246200 login[2603]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:19:34.251948 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 01:19:34.252723 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 01:19:34.257813 systemd-logind[2479]: New session 1 of user core. Jan 14 01:19:34.299420 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 01:19:34.301680 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 01:19:34.314230 (systemd)[2658]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:19:34.326917 login[2604]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:19:34.334848 systemd-logind[2479]: New session 2 of user core. Jan 14 01:19:34.342896 systemd-logind[2479]: New session 3 of user core. Jan 14 01:19:34.385415 waagent[2596]: 2026-01-14T01:19:34.385365Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 14 01:19:34.386021 waagent[2596]: 2026-01-14T01:19:34.385982Z INFO Daemon Daemon Forcing an update of the goal state. Jan 14 01:19:34.393506 waagent[2596]: 2026-01-14T01:19:34.393196Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 01:19:34.414616 waagent[2596]: 2026-01-14T01:19:34.414580Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Jan 14 01:19:34.416770 waagent[2596]: 2026-01-14T01:19:34.416736Z INFO Daemon Jan 14 01:19:34.417839 waagent[2596]: 2026-01-14T01:19:34.417810Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d225e66c-6113-43a7-bb56-bd6e429750bc eTag: 18375363108189766872 source: Fabric] Jan 14 01:19:34.421505 waagent[2596]: 2026-01-14T01:19:34.420873Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 14 01:19:34.422455 waagent[2596]: 2026-01-14T01:19:34.422430Z INFO Daemon Jan 14 01:19:34.422598 waagent[2596]: 2026-01-14T01:19:34.422578Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 14 01:19:34.430179 waagent[2596]: 2026-01-14T01:19:34.430150Z INFO Daemon Daemon Downloading artifacts profile blob Jan 14 01:19:34.479093 systemd[2658]: Queued start job for default target default.target. Jan 14 01:19:34.486801 systemd[2658]: Created slice app.slice - User Application Slice. Jan 14 01:19:34.487011 systemd[2658]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 01:19:34.487080 systemd[2658]: Reached target paths.target - Paths. Jan 14 01:19:34.487225 systemd[2658]: Reached target timers.target - Timers. 
Jan 14 01:19:34.490584 systemd[2658]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 01:19:34.491379 systemd[2658]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 01:19:34.501231 systemd[2658]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 01:19:34.501379 systemd[2658]: Reached target sockets.target - Sockets. Jan 14 01:19:34.505838 waagent[2596]: 2026-01-14T01:19:34.505793Z INFO Daemon Downloaded certificate {'thumbprint': 'FC7C7F0F7FB4622865C49C00A1B340A7F166B66A', 'hasPrivateKey': True} Jan 14 01:19:34.508513 waagent[2596]: 2026-01-14T01:19:34.507807Z INFO Daemon Fetch goal state completed Jan 14 01:19:34.513732 systemd[2658]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 01:19:34.513816 systemd[2658]: Reached target basic.target - Basic System. Jan 14 01:19:34.513924 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 01:19:34.514130 systemd[2658]: Reached target default.target - Main User Target. Jan 14 01:19:34.514156 systemd[2658]: Startup finished in 172ms. Jan 14 01:19:34.515917 waagent[2596]: 2026-01-14T01:19:34.514820Z INFO Daemon Daemon Starting provisioning Jan 14 01:19:34.516204 waagent[2596]: 2026-01-14T01:19:34.516153Z INFO Daemon Daemon Handle ovf-env.xml. Jan 14 01:19:34.517303 waagent[2596]: 2026-01-14T01:19:34.516611Z INFO Daemon Daemon Set hostname [ci-4578.0.0-p-9807086b3c] Jan 14 01:19:34.517706 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 01:19:34.518259 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 01:19:34.552612 waagent[2596]: 2026-01-14T01:19:34.552182Z INFO Daemon Daemon Publish hostname [ci-4578.0.0-p-9807086b3c] Jan 14 01:19:34.552844 waagent[2596]: 2026-01-14T01:19:34.552812Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 14 01:19:34.553130 waagent[2596]: 2026-01-14T01:19:34.553106Z INFO Daemon Daemon Primary interface is [eth0] Jan 14 01:19:34.560478 systemd-networkd[2133]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:19:34.560503 systemd-networkd[2133]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:19:34.560562 systemd-networkd[2133]: eth0: DHCP lease lost Jan 14 01:19:34.577082 waagent[2596]: 2026-01-14T01:19:34.575713Z INFO Daemon Daemon Create user account if not exists Jan 14 01:19:34.577082 waagent[2596]: 2026-01-14T01:19:34.576244Z INFO Daemon Daemon User core already exists, skip useradd Jan 14 01:19:34.577082 waagent[2596]: 2026-01-14T01:19:34.576671Z INFO Daemon Daemon Configure sudoer Jan 14 01:19:34.584677 waagent[2596]: 2026-01-14T01:19:34.584641Z INFO Daemon Daemon Configure sshd Jan 14 01:19:34.585714 systemd-networkd[2133]: eth0: DHCPv4 address 10.200.4.7/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 14 01:19:34.591236 waagent[2596]: 2026-01-14T01:19:34.591186Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 14 01:19:34.591867 waagent[2596]: 2026-01-14T01:19:34.591629Z INFO Daemon Daemon Deploy ssh public key. 
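A quick consistency check of the DHCPv4 lease logged above (10.200.4.7/24, gateway 10.200.4.1, served by 168.63.129.16); the values are copied from the systemd-networkd line, nothing here queries the live system:

    # Consistency check of the DHCPv4 lease reported by systemd-networkd above.
    import ipaddress

    iface = ipaddress.ip_interface("10.200.4.7/24")
    gateway = ipaddress.ip_address("10.200.4.1")
    print(iface.network)                    # 10.200.4.0/24
    print(iface.network.broadcast_address)  # 10.200.4.255
    print(gateway in iface.network)         # True: the gateway is on-link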
Jan 14 01:19:35.689676 waagent[2596]: 2026-01-14T01:19:35.689628Z INFO Daemon Daemon Provisioning complete Jan 14 01:19:35.698080 waagent[2596]: 2026-01-14T01:19:35.698047Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 14 01:19:35.702097 waagent[2596]: 2026-01-14T01:19:35.698587Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 14 01:19:35.702097 waagent[2596]: 2026-01-14T01:19:35.699032Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 14 01:19:35.805682 waagent[2704]: 2026-01-14T01:19:35.805601Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 14 01:19:35.805947 waagent[2704]: 2026-01-14T01:19:35.805721Z INFO ExtHandler ExtHandler OS: flatcar 4578.0.0 Jan 14 01:19:35.805947 waagent[2704]: 2026-01-14T01:19:35.805778Z INFO ExtHandler ExtHandler Python: 3.12.11 Jan 14 01:19:35.805947 waagent[2704]: 2026-01-14T01:19:35.805825Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 14 01:19:35.842991 waagent[2704]: 2026-01-14T01:19:35.842946Z INFO ExtHandler ExtHandler Distro: flatcar-4578.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.12.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 14 01:19:35.843115 waagent[2704]: 2026-01-14T01:19:35.843092Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:19:35.843178 waagent[2704]: 2026-01-14T01:19:35.843145Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:19:35.853033 waagent[2704]: 2026-01-14T01:19:35.852973Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 14 01:19:35.867925 waagent[2704]: 2026-01-14T01:19:35.867897Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Jan 14 01:19:35.868248 waagent[2704]: 2026-01-14T01:19:35.868216Z INFO ExtHandler Jan 14 01:19:35.868296 waagent[2704]: 2026-01-14T01:19:35.868275Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6c9f7780-dbbe-4bdf-a801-9bfd9055543b eTag: 18375363108189766872 source: Fabric] Jan 14 01:19:35.868497 waagent[2704]: 2026-01-14T01:19:35.868460Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 14 01:19:35.868837 waagent[2704]: 2026-01-14T01:19:35.868810Z INFO ExtHandler Jan 14 01:19:35.868869 waagent[2704]: 2026-01-14T01:19:35.868856Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 01:19:35.875289 waagent[2704]: 2026-01-14T01:19:35.875259Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 01:19:35.949082 waagent[2704]: 2026-01-14T01:19:35.949002Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FC7C7F0F7FB4622865C49C00A1B340A7F166B66A', 'hasPrivateKey': True} Jan 14 01:19:35.949387 waagent[2704]: 2026-01-14T01:19:35.949357Z INFO ExtHandler Fetch goal state completed Jan 14 01:19:35.965897 waagent[2704]: 2026-01-14T01:19:35.965846Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 14 01:19:35.970089 waagent[2704]: 2026-01-14T01:19:35.970044Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2704 Jan 14 01:19:35.970190 waagent[2704]: 2026-01-14T01:19:35.970165Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 01:19:35.970415 waagent[2704]: 2026-01-14T01:19:35.970393Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 14 01:19:35.971410 waagent[2704]: 2026-01-14T01:19:35.971374Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 01:19:35.971748 waagent[2704]: 2026-01-14T01:19:35.971716Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4578.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 14 01:19:35.971856 waagent[2704]: 2026-01-14T01:19:35.971831Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 14 01:19:35.972292 waagent[2704]: 2026-01-14T01:19:35.972262Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 01:19:36.023601 waagent[2704]: 2026-01-14T01:19:36.023578Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 01:19:36.023736 waagent[2704]: 2026-01-14T01:19:36.023717Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 01:19:36.029055 waagent[2704]: 2026-01-14T01:19:36.028708Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 01:19:36.033625 systemd[1]: Reload requested from client PID 2719 ('systemctl') (unit waagent.service)... Jan 14 01:19:36.033639 systemd[1]: Reloading... Jan 14 01:19:36.112501 zram_generator::config[2764]: No configuration found. Jan 14 01:19:36.276187 systemd[1]: Reloading finished in 242 ms. Jan 14 01:19:36.301819 waagent[2704]: 2026-01-14T01:19:36.299696Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 01:19:36.301819 waagent[2704]: 2026-01-14T01:19:36.299819Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 01:19:36.533306 waagent[2704]: 2026-01-14T01:19:36.533206Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 14 01:19:36.533572 waagent[2704]: 2026-01-14T01:19:36.533543Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 14 01:19:36.534211 waagent[2704]: 2026-01-14T01:19:36.534177Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 01:19:36.534557 waagent[2704]: 2026-01-14T01:19:36.534528Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 01:19:36.534802 waagent[2704]: 2026-01-14T01:19:36.534774Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:19:36.535011 waagent[2704]: 2026-01-14T01:19:36.534951Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 01:19:36.535049 waagent[2704]: 2026-01-14T01:19:36.535024Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 01:19:36.535242 waagent[2704]: 2026-01-14T01:19:36.535208Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 01:19:36.535439 waagent[2704]: 2026-01-14T01:19:36.535419Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:19:36.535606 waagent[2704]: 2026-01-14T01:19:36.535584Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 01:19:36.535686 waagent[2704]: 2026-01-14T01:19:36.535670Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 14 01:19:36.535728 waagent[2704]: 2026-01-14T01:19:36.535708Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 01:19:36.535882 waagent[2704]: 2026-01-14T01:19:36.535861Z INFO EnvHandler ExtHandler Configure routes Jan 14 01:19:36.535964 waagent[2704]: 2026-01-14T01:19:36.535933Z INFO EnvHandler ExtHandler Gateway:None Jan 14 01:19:36.536028 waagent[2704]: 2026-01-14T01:19:36.535998Z INFO EnvHandler ExtHandler Routes:None Jan 14 01:19:36.536237 waagent[2704]: 2026-01-14T01:19:36.536219Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 01:19:36.536389 waagent[2704]: 2026-01-14T01:19:36.536368Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 14 01:19:36.537170 waagent[2704]: 2026-01-14T01:19:36.537136Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 01:19:36.537170 waagent[2704]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 01:19:36.537170 waagent[2704]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 01:19:36.537170 waagent[2704]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 01:19:36.537170 waagent[2704]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:19:36.537170 waagent[2704]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:19:36.537170 waagent[2704]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 01:19:36.636686 waagent[2704]: 2026-01-14T01:19:36.636642Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 01:19:36.636686 waagent[2704]: Executing ['ip', '-a', '-o', 'link']: Jan 14 01:19:36.636686 waagent[2704]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 01:19:36.636686 waagent[2704]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:d8:12 brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7ced8d6cd812 Jan 14 01:19:36.636686 waagent[2704]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:6c:d8:12 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 14 01:19:36.636686 waagent[2704]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 01:19:36.636686 waagent[2704]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 01:19:36.636686 waagent[2704]: 2: eth0 inet 10.200.4.7/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 01:19:36.636686 waagent[2704]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 01:19:36.636686 waagent[2704]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 01:19:36.636686 waagent[2704]: 2: eth0 inet6 fe80::7eed:8dff:fe6c:d812/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 01:19:36.949266 waagent[2704]: 2026-01-14T01:19:36.949184Z INFO ExtHandler ExtHandler Jan 14 01:19:36.949266 waagent[2704]: 2026-01-14T01:19:36.949261Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f1baa616-5533-4a27-b011-faedb94213bb correlation d5ad58ac-14d4-4e14-89b6-dec85af8428f created: 2026-01-14T01:18:41.363632Z] Jan 14 01:19:36.949631 waagent[2704]: 2026-01-14T01:19:36.949587Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 01:19:36.950144 waagent[2704]: 2026-01-14T01:19:36.950113Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 14 01:19:36.972608 waagent[2704]: 2026-01-14T01:19:36.972564Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 14 01:19:36.972608 waagent[2704]: Try `iptables -h' or 'iptables --help' for more information.) 
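The Destination, Gateway and Mask columns in the /proc/net/route dump above are little-endian hexadecimal IPv4 values; a short decoding sketch using addresses taken from that table:

    # Decode the little-endian hex IPv4 fields from the /proc/net/route dump above.
    import socket
    import struct

    def decode(hex_ip: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(hex_ip, 16)))

    print(decode("0104C80A"))  # 10.200.4.1      default gateway
    print(decode("0004C80A"))  # 10.200.4.0      local subnet
    print(decode("10813FA8"))  # 168.63.129.16   Azure wireserver (target of the firewall rules below)
    print(decode("FEA9FEA9"))  # 169.254.169.254 link-local metadata endpoint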
Jan 14 01:19:36.972897 waagent[2704]: 2026-01-14T01:19:36.972873Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 30F3D585-6AFF-4EA3-901D-CA1AA4202685;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 14 01:19:37.011554 waagent[2704]: 2026-01-14T01:19:37.011461Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 14 01:19:37.011554 waagent[2704]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:19:37.011554 waagent[2704]: pkts bytes target prot opt in out source destination Jan 14 01:19:37.011554 waagent[2704]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:19:37.011554 waagent[2704]: pkts bytes target prot opt in out source destination Jan 14 01:19:37.011554 waagent[2704]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:19:37.011554 waagent[2704]: pkts bytes target prot opt in out source destination Jan 14 01:19:37.011554 waagent[2704]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 01:19:37.011554 waagent[2704]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 01:19:37.011554 waagent[2704]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 01:19:37.014017 waagent[2704]: 2026-01-14T01:19:37.013973Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 01:19:37.014017 waagent[2704]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:19:37.014017 waagent[2704]: pkts bytes target prot opt in out source destination Jan 14 01:19:37.014017 waagent[2704]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:19:37.014017 waagent[2704]: pkts bytes target prot opt in out source destination Jan 14 01:19:37.014017 waagent[2704]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 01:19:37.014017 waagent[2704]: pkts bytes target prot opt in out source destination Jan 14 01:19:37.014017 waagent[2704]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 01:19:37.014017 waagent[2704]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 01:19:37.014017 waagent[2704]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 01:19:44.229723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 01:19:44.231059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:44.753512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:19:44.756549 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:19:44.786225 kubelet[2859]: E0114 01:19:44.786176 2859 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:19:44.788667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:19:44.788783 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:19:44.789098 systemd[1]: kubelet.service: Consumed 121ms CPU time, 110.1M memory peak. Jan 14 01:19:54.979793 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 14 01:19:54.981160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:19:55.443583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:19:55.455697 (kubelet)[2874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:19:55.488291 kubelet[2874]: E0114 01:19:55.488235 2874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:19:55.489795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:19:55.489927 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:19:55.490277 systemd[1]: kubelet.service: Consumed 120ms CPU time, 110.1M memory peak. Jan 14 01:19:55.711066 chronyd[2458]: Selected source PHC0 Jan 14 01:19:57.620239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 01:19:57.621680 systemd[1]: Started sshd@0-10.200.4.7:22-10.200.16.10:48068.service - OpenSSH per-connection server daemon (10.200.16.10:48068). Jan 14 01:19:58.288741 sshd[2881]: Accepted publickey for core from 10.200.16.10 port 48068 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:19:58.289872 sshd-session[2881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:19:58.294444 systemd-logind[2479]: New session 4 of user core. Jan 14 01:19:58.299656 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 01:19:58.704112 systemd[1]: Started sshd@1-10.200.4.7:22-10.200.16.10:48074.service - OpenSSH per-connection server daemon (10.200.16.10:48074). Jan 14 01:19:59.240523 sshd[2888]: Accepted publickey for core from 10.200.16.10 port 48074 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:19:59.241330 sshd-session[2888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:19:59.245534 systemd-logind[2479]: New session 5 of user core. Jan 14 01:19:59.254636 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:19:59.544149 sshd[2892]: Connection closed by 10.200.16.10 port 48074 Jan 14 01:19:59.545589 sshd-session[2888]: pam_unix(sshd:session): session closed for user core Jan 14 01:19:59.548675 systemd-logind[2479]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:19:59.548820 systemd[1]: sshd@1-10.200.4.7:22-10.200.16.10:48074.service: Deactivated successfully. Jan 14 01:19:59.550280 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:19:59.551615 systemd-logind[2479]: Removed session 5. Jan 14 01:19:59.661691 systemd[1]: Started sshd@2-10.200.4.7:22-10.200.16.10:41986.service - OpenSSH per-connection server daemon (10.200.16.10:41986). Jan 14 01:20:00.201866 sshd[2898]: Accepted publickey for core from 10.200.16.10 port 41986 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:20:00.203064 sshd-session[2898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:00.206786 systemd-logind[2479]: New session 6 of user core. Jan 14 01:20:00.209652 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 14 01:20:00.502439 sshd[2902]: Connection closed by 10.200.16.10 port 41986 Jan 14 01:20:00.503632 sshd-session[2898]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:00.507178 systemd-logind[2479]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:20:00.507442 systemd[1]: sshd@2-10.200.4.7:22-10.200.16.10:41986.service: Deactivated successfully. Jan 14 01:20:00.508958 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 01:20:00.510471 systemd-logind[2479]: Removed session 6. Jan 14 01:20:00.616836 systemd[1]: Started sshd@3-10.200.4.7:22-10.200.16.10:41990.service - OpenSSH per-connection server daemon (10.200.16.10:41990). Jan 14 01:20:01.158556 sshd[2908]: Accepted publickey for core from 10.200.16.10 port 41990 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:20:01.159658 sshd-session[2908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:01.163566 systemd-logind[2479]: New session 7 of user core. Jan 14 01:20:01.170638 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 01:20:01.464082 sshd[2912]: Connection closed by 10.200.16.10 port 41990 Jan 14 01:20:01.464640 sshd-session[2908]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:01.467845 systemd[1]: sshd@3-10.200.4.7:22-10.200.16.10:41990.service: Deactivated successfully. Jan 14 01:20:01.468723 systemd-logind[2479]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:20:01.469658 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:20:01.470869 systemd-logind[2479]: Removed session 7. Jan 14 01:20:01.578850 systemd[1]: Started sshd@4-10.200.4.7:22-10.200.16.10:42000.service - OpenSSH per-connection server daemon (10.200.16.10:42000). Jan 14 01:20:02.120522 sshd[2918]: Accepted publickey for core from 10.200.16.10 port 42000 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:20:02.121331 sshd-session[2918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:02.125590 systemd-logind[2479]: New session 8 of user core. Jan 14 01:20:02.136651 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 01:20:02.495660 sudo[2923]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:20:02.495905 sudo[2923]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:02.522218 sudo[2923]: pam_unix(sudo:session): session closed for user root Jan 14 01:20:02.623192 sshd[2922]: Connection closed by 10.200.16.10 port 42000 Jan 14 01:20:02.624738 sshd-session[2918]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:02.628280 systemd-logind[2479]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:20:02.628452 systemd[1]: sshd@4-10.200.4.7:22-10.200.16.10:42000.service: Deactivated successfully. Jan 14 01:20:02.630100 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:20:02.631336 systemd-logind[2479]: Removed session 8. Jan 14 01:20:02.733931 systemd[1]: Started sshd@5-10.200.4.7:22-10.200.16.10:42004.service - OpenSSH per-connection server daemon (10.200.16.10:42004). Jan 14 01:20:03.268238 sshd[2930]: Accepted publickey for core from 10.200.16.10 port 42004 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:20:03.269242 sshd-session[2930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:03.272844 systemd-logind[2479]: New session 9 of user core. 
Jan 14 01:20:03.280657 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 01:20:03.473373 sudo[2936]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:20:03.473644 sudo[2936]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:03.478880 sudo[2936]: pam_unix(sudo:session): session closed for user root Jan 14 01:20:03.483221 sudo[2935]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:20:03.483428 sudo[2935]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:03.489222 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 01:20:03.515000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:20:03.518061 kernel: kauditd_printk_skb: 160 callbacks suppressed Jan 14 01:20:03.518115 kernel: audit: type=1305 audit(1768353603.515:261): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:20:03.518199 kernel: audit: type=1300 audit(1768353603.515:261): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffed50e5b0 a2=420 a3=0 items=0 ppid=2941 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:03.515000 audit[2960]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffed50e5b0 a2=420 a3=0 items=0 ppid=2941 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:03.515000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:20:03.522662 augenrules[2960]: No rules Jan 14 01:20:03.522968 kernel: audit: type=1327 audit(1768353603.515:261): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:20:03.523675 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:20:03.523963 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:20:03.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.526298 sudo[2935]: pam_unix(sudo:session): session closed for user root Jan 14 01:20:03.526665 kernel: audit: type=1130 audit(1768353603.523:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.525000 audit[2935]: USER_END pid=2935 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:03.531073 kernel: audit: type=1131 audit(1768353603.523:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.531114 kernel: audit: type=1106 audit(1768353603.525:264): pid=2935 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.531132 kernel: audit: type=1104 audit(1768353603.525:265): pid=2935 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.525000 audit[2935]: CRED_DISP pid=2935 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.630396 sshd[2934]: Connection closed by 10.200.16.10 port 42004 Jan 14 01:20:03.631610 sshd-session[2930]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:03.632000 audit[2930]: USER_END pid=2930 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:03.638035 kernel: audit: type=1106 audit(1768353603.632:266): pid=2930 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:03.638085 kernel: audit: type=1104 audit(1768353603.632:267): pid=2930 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:03.632000 audit[2930]: CRED_DISP pid=2930 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:03.637462 systemd[1]: sshd@5-10.200.4.7:22-10.200.16.10:42004.service: Deactivated successfully. Jan 14 01:20:03.641723 kernel: audit: type=1131 audit(1768353603.637:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.7:22-10.200.16.10:42004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.4.7:22-10.200.16.10:42004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.641054 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:20:03.642814 systemd-logind[2479]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:20:03.643569 systemd-logind[2479]: Removed session 9. 
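The audit PROCTITLE records above and below store the audited command line hex-encoded, with NUL bytes separating the arguments. A small sketch that decodes the auditctl record logged above; the hex string is copied verbatim from the log:

# Hex proctitle copied from the audit type=1327 record above.
proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
args = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in args))
# -> /sbin/auditctl -R /etc/audit/audit.rules
# i.e. reloading the now-empty rule set, which is why augenrules reports "No rules".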
Jan 14 01:20:03.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.7:22-10.200.16.10:42018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:03.745700 systemd[1]: Started sshd@6-10.200.4.7:22-10.200.16.10:42018.service - OpenSSH per-connection server daemon (10.200.16.10:42018). Jan 14 01:20:04.281000 audit[2969]: USER_ACCT pid=2969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:04.282097 sshd[2969]: Accepted publickey for core from 10.200.16.10 port 42018 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:20:04.282000 audit[2969]: CRED_ACQ pid=2969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:04.282000 audit[2969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf4e79650 a2=3 a3=0 items=0 ppid=1 pid=2969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:04.282000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:20:04.283332 sshd-session[2969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:20:04.287375 systemd-logind[2479]: New session 10 of user core. Jan 14 01:20:04.297636 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 01:20:04.299000 audit[2969]: USER_START pid=2969 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:04.300000 audit[2973]: CRED_ACQ pid=2973 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:04.487000 audit[2974]: USER_ACCT pid=2974 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:04.487820 sudo[2974]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:20:04.487000 audit[2974]: CRED_REFR pid=2974 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:04.487000 audit[2974]: USER_START pid=2974 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:04.488045 sudo[2974]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:20:05.729673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 01:20:05.730923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:06.379260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:06.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:06.390676 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:20:06.422095 kubelet[2999]: E0114 01:20:06.422039 2999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:20:06.423397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:20:06.423522 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:20:06.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:20:06.423929 systemd[1]: kubelet.service: Consumed 121ms CPU time, 110.2M memory peak. Jan 14 01:20:06.799811 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 01:20:06.815746 (dockerd)[3008]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:20:08.333815 dockerd[3008]: time="2026-01-14T01:20:08.333754450Z" level=info msg="Starting up" Jan 14 01:20:08.334581 dockerd[3008]: time="2026-01-14T01:20:08.334545905Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:20:08.344610 dockerd[3008]: time="2026-01-14T01:20:08.344580067Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:20:08.373779 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4269183104-merged.mount: Deactivated successfully. Jan 14 01:20:08.506041 dockerd[3008]: time="2026-01-14T01:20:08.506008181Z" level=info msg="Loading containers: start." 
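The kubelet failure above (and the later restart attempts) all report the same cause: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is normally written by "kubeadm init" or "kubeadm join", which presumably has not run at this point. Purely as an illustration of what is expected at that path; the field values below are illustrative assumptions, not taken from this host:

from pathlib import Path

# Minimal KubeletConfiguration document; on a real node kubeadm generates this file.
stub = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
"""

config_path = Path("/var/lib/kubelet/config.yaml")
print(f"{config_path} exists: {config_path.exists()}")
# config_path.write_text(stub)   # not done here; kubeadm owns this file on a managed node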
Jan 14 01:20:08.533507 kernel: Initializing XFRM netlink socket Jan 14 01:20:08.555000 audit[3054]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.557889 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 01:20:08.557925 kernel: audit: type=1325 audit(1768353608.555:280): table=nat:5 family=2 entries=2 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.555000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb33da5e0 a2=0 a3=0 items=0 ppid=3008 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.564011 kernel: audit: type=1300 audit(1768353608.555:280): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb33da5e0 a2=0 a3=0 items=0 ppid=3008 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.565564 kernel: audit: type=1327 audit(1768353608.555:280): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:20:08.555000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:20:08.559000 audit[3056]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.569332 kernel: audit: type=1325 audit(1768353608.559:281): table=filter:6 family=2 entries=2 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.559000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd2ff9a650 a2=0 a3=0 items=0 ppid=3008 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.573337 kernel: audit: type=1300 audit(1768353608.559:281): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd2ff9a650 a2=0 a3=0 items=0 ppid=3008 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.559000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:20:08.576236 kernel: audit: type=1327 audit(1768353608.559:281): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:20:08.563000 audit[3058]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.578890 kernel: audit: type=1325 audit(1768353608.563:282): table=filter:7 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.563000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0412f240 a2=0 a3=0 items=0 ppid=3008 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.583633 kernel: audit: type=1300 audit(1768353608.563:282): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0412f240 a2=0 a3=0 items=0 ppid=3008 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.563000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:20:08.586790 kernel: audit: type=1327 audit(1768353608.563:282): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:20:08.567000 audit[3060]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.589240 kernel: audit: type=1325 audit(1768353608.567:283): table=filter:8 family=2 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.567000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff641275c0 a2=0 a3=0 items=0 ppid=3008 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.567000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:20:08.571000 audit[3062]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.571000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe883cb300 a2=0 a3=0 items=0 ppid=3008 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:20:08.574000 audit[3064]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.574000 audit[3064]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc745df110 a2=0 a3=0 items=0 ppid=3008 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:08.588000 audit[3066]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.588000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe3616feb0 a2=0 a3=0 items=0 ppid=3008 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.588000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:20:08.590000 audit[3068]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.590000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe990af480 a2=0 a3=0 items=0 ppid=3008 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:20:08.639000 audit[3071]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.639000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffec184f730 a2=0 a3=0 items=0 ppid=3008 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.639000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:20:08.641000 audit[3073]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.641000 audit[3073]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff7d3ae4a0 a2=0 a3=0 items=0 ppid=3008 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.641000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:20:08.642000 audit[3075]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.642000 audit[3075]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc80bc8540 a2=0 a3=0 items=0 ppid=3008 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.642000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:20:08.644000 audit[3077]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.644000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffdb06bb750 a2=0 a3=0 items=0 ppid=3008 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.644000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:08.646000 audit[3079]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.646000 audit[3079]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffff646fb50 a2=0 a3=0 items=0 ppid=3008 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:20:08.706000 audit[3109]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.706000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffefc9d54b0 a2=0 a3=0 items=0 ppid=3008 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.706000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:20:08.707000 audit[3111]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.707000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe3a085810 a2=0 a3=0 items=0 ppid=3008 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:20:08.709000 audit[3113]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.709000 audit[3113]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0654c160 a2=0 a3=0 items=0 ppid=3008 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:20:08.710000 audit[3115]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.710000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe25769510 a2=0 a3=0 items=0 ppid=3008 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:20:08.712000 audit[3117]: NETFILTER_CFG table=filter:22 family=10 entries=1 
op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.712000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd84697d50 a2=0 a3=0 items=0 ppid=3008 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:20:08.713000 audit[3119]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.713000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd38d1bfe0 a2=0 a3=0 items=0 ppid=3008 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.713000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:08.715000 audit[3121]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.715000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd19dcda50 a2=0 a3=0 items=0 ppid=3008 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:20:08.716000 audit[3123]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.716000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd791c6970 a2=0 a3=0 items=0 ppid=3008 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:20:08.718000 audit[3125]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.718000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd182d0010 a2=0 a3=0 items=0 ppid=3008 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:20:08.720000 audit[3127]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3127 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.720000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcd00ed050 a2=0 a3=0 items=0 ppid=3008 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:20:08.722000 audit[3129]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.722000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffc19d87a30 a2=0 a3=0 items=0 ppid=3008 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.722000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:20:08.723000 audit[3131]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.723000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc75be5af0 a2=0 a3=0 items=0 ppid=3008 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:20:08.725000 audit[3133]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.725000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd531b77a0 a2=0 a3=0 items=0 ppid=3008 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.725000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:20:08.729000 audit[3138]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.729000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff0ff410b0 a2=0 a3=0 items=0 ppid=3008 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:20:08.731000 audit[3140]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.731000 audit[3140]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd65de7320 a2=0 a3=0 
items=0 ppid=3008 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:20:08.732000 audit[3142]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.732000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc0321c740 a2=0 a3=0 items=0 ppid=3008 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.732000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:20:08.734000 audit[3144]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.734000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4291a380 a2=0 a3=0 items=0 ppid=3008 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:20:08.736000 audit[3146]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.736000 audit[3146]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff76ff1a70 a2=0 a3=0 items=0 ppid=3008 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:20:08.737000 audit[3148]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:08.737000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdc00dd3c0 a2=0 a3=0 items=0 ppid=3008 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:20:08.800000 audit[3153]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.800000 audit[3153]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff9eae07e0 a2=0 a3=0 items=0 ppid=3008 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.800000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:20:08.802000 audit[3155]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.802000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff95398710 a2=0 a3=0 items=0 ppid=3008 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.802000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:20:08.809000 audit[3163]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.809000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc7966f790 a2=0 a3=0 items=0 ppid=3008 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:20:08.813000 audit[3168]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.813000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcb2547bb0 a2=0 a3=0 items=0 ppid=3008 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.813000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:20:08.815000 audit[3170]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.815000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fffa9d62020 a2=0 a3=0 items=0 ppid=3008 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.815000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:20:08.817000 audit[3172]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.817000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcfe095960 a2=0 a3=0 items=0 ppid=3008 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:20:08.818000 audit[3174]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.818000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe2a3d8510 a2=0 a3=0 items=0 ppid=3008 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:20:08.820000 audit[3176]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:08.820000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc08b8c270 a2=0 a3=0 items=0 ppid=3008 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:08.820000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:20:08.822187 systemd-networkd[2133]: docker0: Link UP Jan 14 01:20:08.838942 dockerd[3008]: time="2026-01-14T01:20:08.838707290Z" level=info msg="Loading containers: done." Jan 14 01:20:08.894830 dockerd[3008]: time="2026-01-14T01:20:08.894802151Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:20:08.894943 dockerd[3008]: time="2026-01-14T01:20:08.894874997Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:20:08.894968 dockerd[3008]: time="2026-01-14T01:20:08.894943587Z" level=info msg="Initializing buildkit" Jan 14 01:20:08.944017 dockerd[3008]: time="2026-01-14T01:20:08.943993371Z" level=info msg="Completed buildkit initialization" Jan 14 01:20:08.950159 dockerd[3008]: time="2026-01-14T01:20:08.950119262Z" level=info msg="Daemon has completed initialization" Jan 14 01:20:08.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:08.950837 dockerd[3008]: time="2026-01-14T01:20:08.950213236Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:20:08.950398 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:20:10.039991 containerd[2507]: time="2026-01-14T01:20:10.039512574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 14 01:20:11.052639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36789157.mount: Deactivated successfully. 
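Decoded, the hex proctitles in the NETFILTER_CFG records above are ordinary iptables/ip6tables invocations: dockerd creates its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1, DOCKER-ISOLATION-STAGE-2, DOCKER-USER), hooks them into PREROUTING, OUTPUT and FORWARD, and adds a POSTROUTING MASQUERADE rule for 172.17.0.0/16 out of docker0, Docker's default bridge subnet. A quick check of which source addresses that NAT rule covers; only 172.17.0.0/16 itself comes from the decoded rule, the sample addresses are illustrative (10.200.4.7 is this host's own address from the sshd entries):

import ipaddress

bridge_subnet = ipaddress.ip_network("172.17.0.0/16")   # from the decoded MASQUERADE rule
for addr in ("172.17.0.2", "10.200.4.7"):                # sample container address, host address
    covered = ipaddress.ip_address(addr) in bridge_subnet
    print(f"{addr}: {'masqueraded via docker0' if covered else 'not covered by this rule'}")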
Jan 14 01:20:12.053215 containerd[2507]: time="2026-01-14T01:20:12.053172031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:12.056246 containerd[2507]: time="2026-01-14T01:20:12.056212183Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=25988189" Jan 14 01:20:12.059988 containerd[2507]: time="2026-01-14T01:20:12.059951733Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:12.071117 containerd[2507]: time="2026-01-14T01:20:12.070904694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:12.071582 containerd[2507]: time="2026-01-14T01:20:12.071558105Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.032003979s" Jan 14 01:20:12.071629 containerd[2507]: time="2026-01-14T01:20:12.071592425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 14 01:20:12.072309 containerd[2507]: time="2026-01-14T01:20:12.072288346Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 14 01:20:13.408962 containerd[2507]: time="2026-01-14T01:20:13.408917823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:13.412719 containerd[2507]: time="2026-01-14T01:20:13.412601933Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=0" Jan 14 01:20:13.416212 containerd[2507]: time="2026-01-14T01:20:13.416189105Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:13.425835 containerd[2507]: time="2026-01-14T01:20:13.425806263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:13.426659 containerd[2507]: time="2026-01-14T01:20:13.426509102Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.354184343s" Jan 14 01:20:13.426659 containerd[2507]: time="2026-01-14T01:20:13.426540420Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 14 01:20:13.427116 
containerd[2507]: time="2026-01-14T01:20:13.427091551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 14 01:20:13.614412 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 01:20:14.658499 containerd[2507]: time="2026-01-14T01:20:14.658441260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:14.664006 containerd[2507]: time="2026-01-14T01:20:14.663880344Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Jan 14 01:20:14.666967 containerd[2507]: time="2026-01-14T01:20:14.666946394Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:14.671356 containerd[2507]: time="2026-01-14T01:20:14.671329995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:14.671996 containerd[2507]: time="2026-01-14T01:20:14.671890939Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.24477028s" Jan 14 01:20:14.671996 containerd[2507]: time="2026-01-14T01:20:14.671917000Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 14 01:20:14.672585 containerd[2507]: time="2026-01-14T01:20:14.672557379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 14 01:20:15.583976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236051405.mount: Deactivated successfully. 
Jan 14 01:20:15.851100 containerd[2507]: time="2026-01-14T01:20:15.850990134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:15.853493 containerd[2507]: time="2026-01-14T01:20:15.853378163Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=14375786" Jan 14 01:20:15.855867 containerd[2507]: time="2026-01-14T01:20:15.855843296Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:15.867820 containerd[2507]: time="2026-01-14T01:20:15.867779073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:15.868154 containerd[2507]: time="2026-01-14T01:20:15.868039930Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.19545823s" Jan 14 01:20:15.868154 containerd[2507]: time="2026-01-14T01:20:15.868069111Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 14 01:20:15.868602 containerd[2507]: time="2026-01-14T01:20:15.868579941Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 14 01:20:16.479684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 01:20:16.481170 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:16.937477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:16.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:16.938697 kernel: kauditd_printk_skb: 111 callbacks suppressed Jan 14 01:20:16.938792 kernel: audit: type=1130 audit(1768353616.937:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:16.943398 (kubelet)[3301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:20:16.973852 kubelet[3301]: E0114 01:20:16.973816 3301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:20:16.975193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:20:16.975315 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 01:20:16.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:20:16.975647 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.2M memory peak. Jan 14 01:20:16.980504 kernel: audit: type=1131 audit(1768353616.975:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:20:17.098647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377004296.mount: Deactivated successfully. Jan 14 01:20:17.886093 update_engine[2480]: I20260114 01:20:17.886036 2480 update_attempter.cc:509] Updating boot flags... Jan 14 01:20:19.253139 waagent[2704]: 2026-01-14T01:20:19.253097Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 14 01:20:19.264256 waagent[2704]: 2026-01-14T01:20:19.264217Z INFO ExtHandler Jan 14 01:20:19.264355 waagent[2704]: 2026-01-14T01:20:19.264309Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ba4a319c-0c10-4913-8bf5-3f289e061082 eTag: 2101404417298253881 source: Fabric] Jan 14 01:20:19.264616 waagent[2704]: 2026-01-14T01:20:19.264586Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 14 01:20:19.265006 waagent[2704]: 2026-01-14T01:20:19.264976Z INFO ExtHandler Jan 14 01:20:19.265055 waagent[2704]: 2026-01-14T01:20:19.265028Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 14 01:20:19.336312 waagent[2704]: 2026-01-14T01:20:19.336277Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 01:20:19.474383 waagent[2704]: 2026-01-14T01:20:19.474333Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FC7C7F0F7FB4622865C49C00A1B340A7F166B66A', 'hasPrivateKey': True} Jan 14 01:20:19.474740 waagent[2704]: 2026-01-14T01:20:19.474711Z INFO ExtHandler Fetch goal state completed Jan 14 01:20:19.474981 waagent[2704]: 2026-01-14T01:20:19.474957Z INFO ExtHandler ExtHandler Jan 14 01:20:19.475027 waagent[2704]: 2026-01-14T01:20:19.475007Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: edd7d6b0-75e5-4552-997e-a4b84da8fa22 correlation d5ad58ac-14d4-4e14-89b6-dec85af8428f created: 2026-01-14T01:20:13.497871Z] Jan 14 01:20:19.475228 waagent[2704]: 2026-01-14T01:20:19.475207Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 01:20:19.475638 waagent[2704]: 2026-01-14T01:20:19.475615Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 14 01:20:26.979712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 01:20:26.981589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:27.465409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:27.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:27.471511 kernel: audit: type=1130 audit(1768353627.464:323): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:27.478765 (kubelet)[3403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:20:27.509412 kubelet[3403]: E0114 01:20:27.509360 3403 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:20:27.510744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:20:27.510862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:20:27.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:20:27.511203 systemd[1]: kubelet.service: Consumed 115ms CPU time, 108.3M memory peak. Jan 14 01:20:27.514534 kernel: audit: type=1131 audit(1768353627.509:324): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:20:27.957866 containerd[2507]: time="2026-01-14T01:20:27.957826447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:27.961624 containerd[2507]: time="2026-01-14T01:20:27.961449894Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21905911" Jan 14 01:20:27.965229 containerd[2507]: time="2026-01-14T01:20:27.965203866Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:27.969076 containerd[2507]: time="2026-01-14T01:20:27.969049980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:27.969737 containerd[2507]: time="2026-01-14T01:20:27.969714786Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 12.101108004s" Jan 14 01:20:27.969816 containerd[2507]: time="2026-01-14T01:20:27.969805046Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 14 01:20:27.970301 containerd[2507]: time="2026-01-14T01:20:27.970273986Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 14 01:20:28.529404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2075064270.mount: Deactivated successfully. 
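For reference, the spacing of the kubelet restart attempts can be read straight off the timestamps of the "Scheduled restart job" entries above (restart counters 3, 4 and 5): the gaps are about 10.8 s and 10.5 s, consistent with a restart delay of roughly ten seconds for the unit. That delay is an inference from the timing, not something the log states directly.

from datetime import datetime

# Timestamps of the "Scheduled restart job" entries above (counters 3, 4, 5).
attempts = ["01:20:05.729673", "01:20:16.479684", "01:20:26.979712"]
ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]
for earlier, later in zip(ts, ts[1:]):
    print(f"{(later - earlier).total_seconds():.1f} s between restart attempts")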
Jan 14 01:20:28.557124 containerd[2507]: time="2026-01-14T01:20:28.557085785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:28.559822 containerd[2507]: time="2026-01-14T01:20:28.559690563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 14 01:20:28.566187 containerd[2507]: time="2026-01-14T01:20:28.566163886Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:28.571191 containerd[2507]: time="2026-01-14T01:20:28.570555916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:28.571191 containerd[2507]: time="2026-01-14T01:20:28.571064880Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 600.763108ms" Jan 14 01:20:28.571191 containerd[2507]: time="2026-01-14T01:20:28.571090752Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 14 01:20:28.571760 containerd[2507]: time="2026-01-14T01:20:28.571739532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 14 01:20:29.316301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024286197.mount: Deactivated successfully. Jan 14 01:20:31.671321 containerd[2507]: time="2026-01-14T01:20:31.671278859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:31.674086 containerd[2507]: time="2026-01-14T01:20:31.673923250Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001" Jan 14 01:20:31.677838 containerd[2507]: time="2026-01-14T01:20:31.677813241Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:31.682476 containerd[2507]: time="2026-01-14T01:20:31.682452754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:31.683305 containerd[2507]: time="2026-01-14T01:20:31.683193177Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.11142878s" Jan 14 01:20:31.683305 containerd[2507]: time="2026-01-14T01:20:31.683218962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 14 01:20:34.655224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:20:34.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:34.655774 systemd[1]: kubelet.service: Consumed 115ms CPU time, 108.3M memory peak. Jan 14 01:20:34.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:34.661174 kernel: audit: type=1130 audit(1768353634.654:325): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:34.661266 kernel: audit: type=1131 audit(1768353634.654:326): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:34.662583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:34.688453 systemd[1]: Reload requested from client PID 3500 ('systemctl') (unit session-10.scope)... Jan 14 01:20:34.688621 systemd[1]: Reloading... Jan 14 01:20:34.764521 zram_generator::config[3547]: No configuration found. Jan 14 01:20:34.953842 systemd[1]: Reloading finished in 264 ms. Jan 14 01:20:35.012165 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:20:35.012224 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:20:35.012579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:35.012635 systemd[1]: kubelet.service: Consumed 74ms CPU time, 78M memory peak. Jan 14 01:20:35.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:20:35.016473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:35.016565 kernel: audit: type=1130 audit(1768353635.011:327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:20:35.016000 audit: BPF prog-id=87 op=LOAD Jan 14 01:20:35.016000 audit: BPF prog-id=86 op=UNLOAD Jan 14 01:20:35.021207 kernel: audit: type=1334 audit(1768353635.016:328): prog-id=87 op=LOAD Jan 14 01:20:35.021250 kernel: audit: type=1334 audit(1768353635.016:329): prog-id=86 op=UNLOAD Jan 14 01:20:35.019000 audit: BPF prog-id=88 op=LOAD Jan 14 01:20:35.030954 kernel: audit: type=1334 audit(1768353635.019:330): prog-id=88 op=LOAD Jan 14 01:20:35.030000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:20:35.030000 audit: BPF prog-id=89 op=LOAD Jan 14 01:20:35.037534 kernel: audit: type=1334 audit(1768353635.030:331): prog-id=70 op=UNLOAD Jan 14 01:20:35.037682 kernel: audit: type=1334 audit(1768353635.030:332): prog-id=89 op=LOAD Jan 14 01:20:35.037804 kernel: audit: type=1334 audit(1768353635.030:333): prog-id=90 op=LOAD Jan 14 01:20:35.037873 kernel: audit: type=1334 audit(1768353635.030:334): prog-id=71 op=UNLOAD Jan 14 01:20:35.030000 audit: BPF prog-id=90 op=LOAD Jan 14 01:20:35.030000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:20:35.030000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:20:35.030000 audit: BPF prog-id=91 op=LOAD Jan 14 01:20:35.030000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:20:35.031000 audit: BPF prog-id=92 op=LOAD Jan 14 01:20:35.031000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:20:35.031000 audit: BPF prog-id=93 op=LOAD Jan 14 01:20:35.031000 audit: BPF prog-id=94 op=LOAD Jan 14 01:20:35.031000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:20:35.031000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:20:35.035000 audit: BPF prog-id=95 op=LOAD Jan 14 01:20:35.035000 audit: BPF prog-id=96 op=LOAD Jan 14 01:20:35.035000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:20:35.035000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:20:35.037000 audit: BPF prog-id=97 op=LOAD Jan 14 01:20:35.037000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:20:35.037000 audit: BPF prog-id=98 op=LOAD Jan 14 01:20:35.037000 audit: BPF prog-id=99 op=LOAD Jan 14 01:20:35.037000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:20:35.037000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:20:35.037000 audit: BPF prog-id=100 op=LOAD Jan 14 01:20:35.037000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:20:35.037000 audit: BPF prog-id=101 op=LOAD Jan 14 01:20:35.037000 audit: BPF prog-id=102 op=LOAD Jan 14 01:20:35.037000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:20:35.037000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:20:35.038000 audit: BPF prog-id=103 op=LOAD Jan 14 01:20:35.038000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:20:35.038000 audit: BPF prog-id=104 op=LOAD Jan 14 01:20:35.038000 audit: BPF prog-id=105 op=LOAD Jan 14 01:20:35.038000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:20:35.038000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:20:35.039000 audit: BPF prog-id=106 op=LOAD Jan 14 01:20:35.039000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:20:35.499408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:35.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:35.509866 (kubelet)[3617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:20:35.541406 kubelet[3617]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 14 01:20:35.541672 kubelet[3617]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:20:35.541831 kubelet[3617]: I0114 01:20:35.541809 3617 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:20:35.830047 kubelet[3617]: I0114 01:20:35.829968 3617 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 01:20:35.830047 kubelet[3617]: I0114 01:20:35.829988 3617 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:20:35.830047 kubelet[3617]: I0114 01:20:35.830010 3617 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 01:20:35.830047 kubelet[3617]: I0114 01:20:35.830016 3617 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:20:35.830396 kubelet[3617]: I0114 01:20:35.830208 3617 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:20:35.841184 kubelet[3617]: E0114 01:20:35.841136 3617 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:20:35.841286 kubelet[3617]: I0114 01:20:35.841258 3617 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:20:35.845295 kubelet[3617]: I0114 01:20:35.844510 3617 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:20:35.846479 kubelet[3617]: I0114 01:20:35.846462 3617 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 01:20:35.847738 kubelet[3617]: I0114 01:20:35.847704 3617 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:20:35.847871 kubelet[3617]: I0114 01:20:35.847735 3617 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-9807086b3c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:20:35.847968 kubelet[3617]: I0114 01:20:35.847874 3617 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:20:35.847968 kubelet[3617]: I0114 01:20:35.847883 3617 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 01:20:35.847968 kubelet[3617]: I0114 01:20:35.847955 3617 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 01:20:35.854498 kubelet[3617]: I0114 01:20:35.854474 3617 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:20:35.854620 kubelet[3617]: I0114 01:20:35.854611 3617 kubelet.go:475] "Attempting to sync node with API server" Jan 14 01:20:35.854648 kubelet[3617]: I0114 01:20:35.854623 3617 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:20:35.854648 kubelet[3617]: I0114 01:20:35.854641 3617 kubelet.go:387] "Adding apiserver pod source" Jan 14 01:20:35.854692 kubelet[3617]: I0114 01:20:35.854659 3617 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:20:35.858004 kubelet[3617]: E0114 01:20:35.857592 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:20:35.858004 kubelet[3617]: E0114 01:20:35.857793 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.200.4.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-9807086b3c&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:20:35.858101 kubelet[3617]: I0114 01:20:35.858079 3617 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:20:35.858543 kubelet[3617]: I0114 01:20:35.858529 3617 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:20:35.858580 kubelet[3617]: I0114 01:20:35.858559 3617 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 01:20:35.858607 kubelet[3617]: W0114 01:20:35.858598 3617 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:20:35.861901 kubelet[3617]: I0114 01:20:35.861888 3617 server.go:1262] "Started kubelet" Jan 14 01:20:35.862455 kubelet[3617]: I0114 01:20:35.862432 3617 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:20:35.865000 audit[3630]: NETFILTER_CFG table=mangle:45 family=10 entries=2 op=nft_register_chain pid=3630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:35.865000 audit[3630]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd0a737b70 a2=0 a3=0 items=0 ppid=3617 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:20:35.867711 kubelet[3617]: E0114 01:20:35.865444 3617 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.7:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578.0.0-p-9807086b3c.188a74433ed11774 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578.0.0-p-9807086b3c,UID:ci-4578.0.0-p-9807086b3c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578.0.0-p-9807086b3c,},FirstTimestamp:2026-01-14 01:20:35.86186226 +0000 UTC m=+0.348936260,LastTimestamp:2026-01-14 01:20:35.86186226 +0000 UTC m=+0.348936260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578.0.0-p-9807086b3c,}" Jan 14 01:20:35.867873 kubelet[3617]: I0114 01:20:35.867853 3617 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:20:35.868063 kubelet[3617]: I0114 01:20:35.868039 3617 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:20:35.867000 audit[3631]: NETFILTER_CFG table=mangle:46 family=2 entries=2 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.867000 audit[3631]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0838c140 a2=0 a3=0 items=0 ppid=3617 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:20:35.869793 kubelet[3617]: I0114 01:20:35.869779 3617 server.go:310] "Adding debug handlers to kubelet server" Jan 14 01:20:35.868000 audit[3632]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.868000 audit[3632]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7f073230 a2=0 a3=0 items=0 ppid=3617 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:20:35.871094 kubelet[3617]: I0114 01:20:35.871072 3617 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 01:20:35.871281 kubelet[3617]: E0114 01:20:35.871264 3617 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4578.0.0-p-9807086b3c\" not found" Jan 14 01:20:35.870000 audit[3635]: NETFILTER_CFG table=mangle:48 family=10 entries=1 op=nft_register_chain pid=3635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:35.870000 audit[3635]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff06b670d0 a2=0 a3=0 items=0 ppid=3617 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:20:35.871000 audit[3637]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=3637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:35.871000 audit[3637]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcde40a8e0 a2=0 a3=0 items=0 ppid=3617 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:20:35.873456 kubelet[3617]: I0114 01:20:35.873093 3617 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:20:35.873456 kubelet[3617]: I0114 01:20:35.873134 3617 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 01:20:35.873456 kubelet[3617]: I0114 01:20:35.873262 3617 server.go:249] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:20:35.873766 kubelet[3617]: I0114 01:20:35.873753 3617 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:20:35.872000 audit[3638]: NETFILTER_CFG table=filter:50 family=10 entries=1 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:35.872000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff27246a50 a2=0 a3=0 items=0 ppid=3617 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:20:35.873000 audit[3639]: NETFILTER_CFG table=filter:51 family=2 entries=2 op=nft_register_chain pid=3639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.873000 audit[3639]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdd3efa620 a2=0 a3=0 items=0 ppid=3617 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:20:35.876569 kubelet[3617]: E0114 01:20:35.875844 3617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-9807086b3c?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" interval="200ms" Jan 14 01:20:35.877384 kubelet[3617]: I0114 01:20:35.877371 3617 reconciler.go:29] "Reconciler: start to sync state" Jan 14 01:20:35.878036 kubelet[3617]: I0114 01:20:35.878020 3617 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 01:20:35.876000 audit[3641]: NETFILTER_CFG table=filter:52 family=2 entries=2 op=nft_register_chain pid=3641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.876000 audit[3641]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe99dd3d80 a2=0 a3=0 items=0 ppid=3617 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:20:35.878434 kubelet[3617]: E0114 01:20:35.878422 3617 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:20:35.879241 kubelet[3617]: E0114 01:20:35.879221 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:20:35.879639 kubelet[3617]: I0114 01:20:35.879626 3617 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:20:35.879790 kubelet[3617]: I0114 01:20:35.879763 3617 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:20:35.881287 kubelet[3617]: I0114 01:20:35.881270 3617 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:20:35.904000 audit[3644]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.904000 audit[3644]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffebed8e70 a2=0 a3=0 items=0 ppid=3617 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.904000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 14 01:20:35.906334 kubelet[3617]: I0114 01:20:35.906139 3617 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Jan 14 01:20:35.906334 kubelet[3617]: I0114 01:20:35.906153 3617 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 01:20:35.906334 kubelet[3617]: I0114 01:20:35.906171 3617 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 01:20:35.906334 kubelet[3617]: E0114 01:20:35.906199 3617 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:20:35.905000 audit[3646]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3646 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.905000 audit[3646]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebd4543a0 a2=0 a3=0 items=0 ppid=3617 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.905000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:20:35.906000 audit[3647]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.906000 audit[3647]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffded2910c0 a2=0 a3=0 items=0 ppid=3617 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.906000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:20:35.908268 kubelet[3617]: I0114 01:20:35.908256 3617 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:20:35.908309 kubelet[3617]: I0114 01:20:35.908269 3617 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:20:35.908309 kubelet[3617]: I0114 01:20:35.908282 3617 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:20:35.907000 audit[3651]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:35.907000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9d1bf660 a2=0 a3=0 items=0 ppid=3617 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:35.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:20:35.910141 kubelet[3617]: E0114 01:20:35.910095 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:20:35.916140 kubelet[3617]: I0114 01:20:35.916126 3617 policy_none.go:49] "None policy: Start" Jan 14 01:20:35.916140 kubelet[3617]: I0114 01:20:35.916141 3617 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 01:20:35.916220 kubelet[3617]: I0114 01:20:35.916151 3617 state_mem.go:36] "Initializing new in-memory state store" 
logger="Memory Manager state checkpoint" Jan 14 01:20:35.920601 kubelet[3617]: I0114 01:20:35.920589 3617 policy_none.go:47] "Start" Jan 14 01:20:35.923784 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:20:35.934357 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:20:35.936999 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 01:20:35.943999 kubelet[3617]: E0114 01:20:35.943978 3617 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:20:35.944117 kubelet[3617]: I0114 01:20:35.944105 3617 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:20:35.944152 kubelet[3617]: I0114 01:20:35.944115 3617 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:20:35.945260 kubelet[3617]: I0114 01:20:35.945133 3617 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:20:35.946105 kubelet[3617]: E0114 01:20:35.946091 3617 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:20:35.946228 kubelet[3617]: E0114 01:20:35.946128 3617 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4578.0.0-p-9807086b3c\" not found" Jan 14 01:20:36.016228 systemd[1]: Created slice kubepods-burstable-podf18f777686a51c9628d11da799ae90ff.slice - libcontainer container kubepods-burstable-podf18f777686a51c9628d11da799ae90ff.slice. Jan 14 01:20:36.024389 kubelet[3617]: E0114 01:20:36.024340 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.027874 systemd[1]: Created slice kubepods-burstable-podc710b6d3d992c9c1884dfc386dd6b4cb.slice - libcontainer container kubepods-burstable-podc710b6d3d992c9c1884dfc386dd6b4cb.slice. Jan 14 01:20:36.036339 kubelet[3617]: E0114 01:20:36.036313 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.038282 systemd[1]: Created slice kubepods-burstable-pod0e46da19186e4716a371ed10c9203539.slice - libcontainer container kubepods-burstable-pod0e46da19186e4716a371ed10c9203539.slice. 
Jan 14 01:20:36.039745 kubelet[3617]: E0114 01:20:36.039723 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.045672 kubelet[3617]: I0114 01:20:36.045660 3617 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.046036 kubelet[3617]: E0114 01:20:36.046019 3617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.076510 kubelet[3617]: E0114 01:20:36.076472 3617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-9807086b3c?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" interval="400ms" Jan 14 01:20:36.078822 kubelet[3617]: I0114 01:20:36.078642 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e46da19186e4716a371ed10c9203539-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-9807086b3c\" (UID: \"0e46da19186e4716a371ed10c9203539\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078822 kubelet[3617]: I0114 01:20:36.078670 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f18f777686a51c9628d11da799ae90ff-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" (UID: \"f18f777686a51c9628d11da799ae90ff\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078822 kubelet[3617]: I0114 01:20:36.078683 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f18f777686a51c9628d11da799ae90ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" (UID: \"f18f777686a51c9628d11da799ae90ff\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078822 kubelet[3617]: I0114 01:20:36.078695 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078822 kubelet[3617]: I0114 01:20:36.078707 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078925 kubelet[3617]: I0114 01:20:36.078718 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f18f777686a51c9628d11da799ae90ff-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" (UID: \"f18f777686a51c9628d11da799ae90ff\") " 
pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078925 kubelet[3617]: I0114 01:20:36.078732 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078925 kubelet[3617]: I0114 01:20:36.078757 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.078925 kubelet[3617]: I0114 01:20:36.078767 3617 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.247440 kubelet[3617]: I0114 01:20:36.247419 3617 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.247695 kubelet[3617]: E0114 01:20:36.247675 3617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.334198 containerd[2507]: time="2026-01-14T01:20:36.334126306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-9807086b3c,Uid:f18f777686a51c9628d11da799ae90ff,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:36.346140 containerd[2507]: time="2026-01-14T01:20:36.346104768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-9807086b3c,Uid:c710b6d3d992c9c1884dfc386dd6b4cb,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:36.353340 containerd[2507]: time="2026-01-14T01:20:36.353318506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-9807086b3c,Uid:0e46da19186e4716a371ed10c9203539,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:36.477753 kubelet[3617]: E0114 01:20:36.477730 3617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-9807086b3c?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" interval="800ms" Jan 14 01:20:36.649680 kubelet[3617]: I0114 01:20:36.649609 3617 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.650081 kubelet[3617]: E0114 01:20:36.650045 3617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:36.716125 kubelet[3617]: E0114 01:20:36.716091 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:20:36.889246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439029041.mount: Deactivated successfully. Jan 14 01:20:36.929771 containerd[2507]: time="2026-01-14T01:20:36.929694671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:20:36.945612 containerd[2507]: time="2026-01-14T01:20:36.945532306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:20:36.951989 containerd[2507]: time="2026-01-14T01:20:36.951964025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:20:36.955426 containerd[2507]: time="2026-01-14T01:20:36.955399169Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:20:36.962258 containerd[2507]: time="2026-01-14T01:20:36.961787591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:20:36.971199 containerd[2507]: time="2026-01-14T01:20:36.971153830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:20:36.975398 containerd[2507]: time="2026-01-14T01:20:36.975361045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:20:36.975870 containerd[2507]: time="2026-01-14T01:20:36.975847676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 630.019691ms" Jan 14 01:20:36.978238 containerd[2507]: time="2026-01-14T01:20:36.978205312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:20:36.987053 containerd[2507]: time="2026-01-14T01:20:36.987022401Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.716588ms" Jan 14 01:20:36.990657 containerd[2507]: time="2026-01-14T01:20:36.990627644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 614.329705ms" Jan 14 01:20:37.269792 kubelet[3617]: E0114 
01:20:37.269765 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:20:37.278200 kubelet[3617]: E0114 01:20:37.278181 3617 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578.0.0-p-9807086b3c?timeout=10s\": dial tcp 10.200.4.7:6443: connect: connection refused" interval="1.6s" Jan 14 01:20:37.330804 kubelet[3617]: E0114 01:20:37.330781 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578.0.0-p-9807086b3c&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:20:37.392539 containerd[2507]: time="2026-01-14T01:20:37.392479438Z" level=info msg="connecting to shim bc26bd53969cc52944eeb39cdde5365a748db4d05a44e773004792e401b5129e" address="unix:///run/containerd/s/c7fa30ebb897ce65559dafe8993101f7bb7ee7e336f8e2eb60c3c13654937b69" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:37.417340 containerd[2507]: time="2026-01-14T01:20:37.416897386Z" level=info msg="connecting to shim 315780d3d7d125fa1ca2b76810575b66fde63f4eeed58d36a51f807d47605a7e" address="unix:///run/containerd/s/c6f6d757186cb68dc886800f0cb1e754441d9d016745f7fb2ca44050b6471a91" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:37.422920 kubelet[3617]: E0114 01:20:37.422887 3617 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.7:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:20:37.424730 systemd[1]: Started cri-containerd-bc26bd53969cc52944eeb39cdde5365a748db4d05a44e773004792e401b5129e.scope - libcontainer container bc26bd53969cc52944eeb39cdde5365a748db4d05a44e773004792e401b5129e. Jan 14 01:20:37.436253 containerd[2507]: time="2026-01-14T01:20:37.436228402Z" level=info msg="connecting to shim d96a9532c48bd79e9faabd68500a896a3df059a04d26a7e7b64072ad2a860902" address="unix:///run/containerd/s/21e6fbe13ff3e49e5127670fe70047236e91b8b1785a543a3c35748cd2109dc4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:37.447775 systemd[1]: Started cri-containerd-315780d3d7d125fa1ca2b76810575b66fde63f4eeed58d36a51f807d47605a7e.scope - libcontainer container 315780d3d7d125fa1ca2b76810575b66fde63f4eeed58d36a51f807d47605a7e. 
Jan 14 01:20:37.449000 audit: BPF prog-id=107 op=LOAD Jan 14 01:20:37.450000 audit: BPF prog-id=108 op=LOAD Jan 14 01:20:37.450000 audit[3677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3664 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323662643533393639636335323934346565623339636464653533 Jan 14 01:20:37.450000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:20:37.450000 audit[3677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3664 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323662643533393639636335323934346565623339636464653533 Jan 14 01:20:37.451000 audit: BPF prog-id=109 op=LOAD Jan 14 01:20:37.451000 audit[3677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3664 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323662643533393639636335323934346565623339636464653533 Jan 14 01:20:37.452000 audit: BPF prog-id=110 op=LOAD Jan 14 01:20:37.452000 audit[3677]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3664 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323662643533393639636335323934346565623339636464653533 Jan 14 01:20:37.452000 audit: BPF prog-id=110 op=UNLOAD Jan 14 01:20:37.452000 audit[3677]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3664 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323662643533393639636335323934346565623339636464653533 Jan 14 01:20:37.453000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:20:37.453000 audit[3677]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3664 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323662643533393639636335323934346565623339636464653533 Jan 14 01:20:37.453000 audit: BPF prog-id=111 op=LOAD Jan 14 01:20:37.453000 audit[3677]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3664 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.453000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263323662643533393639636335323934346565623339636464653533 Jan 14 01:20:37.454965 kubelet[3617]: I0114 01:20:37.454611 3617 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:37.455540 kubelet[3617]: E0114 01:20:37.455139 3617 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.7:6443/api/v1/nodes\": dial tcp 10.200.4.7:6443: connect: connection refused" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:37.462668 systemd[1]: Started cri-containerd-d96a9532c48bd79e9faabd68500a896a3df059a04d26a7e7b64072ad2a860902.scope - libcontainer container d96a9532c48bd79e9faabd68500a896a3df059a04d26a7e7b64072ad2a860902. 
Jan 14 01:20:37.464000 audit: BPF prog-id=112 op=LOAD Jan 14 01:20:37.464000 audit: BPF prog-id=113 op=LOAD Jan 14 01:20:37.464000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3698 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353738306433643764313235666131636132623736383130353735 Jan 14 01:20:37.464000 audit: BPF prog-id=113 op=UNLOAD Jan 14 01:20:37.464000 audit[3710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3698 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353738306433643764313235666131636132623736383130353735 Jan 14 01:20:37.464000 audit: BPF prog-id=114 op=LOAD Jan 14 01:20:37.464000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3698 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353738306433643764313235666131636132623736383130353735 Jan 14 01:20:37.464000 audit: BPF prog-id=115 op=LOAD Jan 14 01:20:37.464000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3698 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353738306433643764313235666131636132623736383130353735 Jan 14 01:20:37.464000 audit: BPF prog-id=115 op=UNLOAD Jan 14 01:20:37.464000 audit[3710]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3698 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353738306433643764313235666131636132623736383130353735 Jan 14 01:20:37.464000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:20:37.464000 audit[3710]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3698 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353738306433643764313235666131636132623736383130353735 Jan 14 01:20:37.465000 audit: BPF prog-id=116 op=LOAD Jan 14 01:20:37.465000 audit[3710]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3698 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353738306433643764313235666131636132623736383130353735 Jan 14 01:20:37.483000 audit: BPF prog-id=117 op=LOAD Jan 14 01:20:37.483000 audit: BPF prog-id=118 op=LOAD Jan 14 01:20:37.483000 audit[3747]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3726 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439366139353332633438626437396539666161626436383530306138 Jan 14 01:20:37.483000 audit: BPF prog-id=118 op=UNLOAD Jan 14 01:20:37.483000 audit[3747]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3726 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439366139353332633438626437396539666161626436383530306138 Jan 14 01:20:37.483000 audit: BPF prog-id=119 op=LOAD Jan 14 01:20:37.483000 audit[3747]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3726 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439366139353332633438626437396539666161626436383530306138 Jan 14 01:20:37.483000 audit: BPF prog-id=120 op=LOAD Jan 14 01:20:37.483000 audit[3747]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3726 pid=3747 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439366139353332633438626437396539666161626436383530306138 Jan 14 01:20:37.484000 audit: BPF prog-id=120 op=UNLOAD Jan 14 01:20:37.484000 audit[3747]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3726 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439366139353332633438626437396539666161626436383530306138 Jan 14 01:20:37.484000 audit: BPF prog-id=119 op=UNLOAD Jan 14 01:20:37.484000 audit[3747]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3726 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439366139353332633438626437396539666161626436383530306138 Jan 14 01:20:37.484000 audit: BPF prog-id=121 op=LOAD Jan 14 01:20:37.484000 audit[3747]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3726 pid=3747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439366139353332633438626437396539666161626436383530306138 Jan 14 01:20:37.514285 containerd[2507]: time="2026-01-14T01:20:37.513951238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578.0.0-p-9807086b3c,Uid:f18f777686a51c9628d11da799ae90ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc26bd53969cc52944eeb39cdde5365a748db4d05a44e773004792e401b5129e\"" Jan 14 01:20:37.526298 containerd[2507]: time="2026-01-14T01:20:37.526233447Z" level=info msg="CreateContainer within sandbox \"bc26bd53969cc52944eeb39cdde5365a748db4d05a44e773004792e401b5129e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:20:37.528142 containerd[2507]: time="2026-01-14T01:20:37.528119658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578.0.0-p-9807086b3c,Uid:0e46da19186e4716a371ed10c9203539,Namespace:kube-system,Attempt:0,} returns sandbox id \"315780d3d7d125fa1ca2b76810575b66fde63f4eeed58d36a51f807d47605a7e\"" Jan 14 01:20:37.534146 containerd[2507]: time="2026-01-14T01:20:37.534101046Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578.0.0-p-9807086b3c,Uid:c710b6d3d992c9c1884dfc386dd6b4cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d96a9532c48bd79e9faabd68500a896a3df059a04d26a7e7b64072ad2a860902\"" Jan 14 01:20:37.537116 containerd[2507]: time="2026-01-14T01:20:37.536903370Z" level=info msg="CreateContainer within sandbox \"315780d3d7d125fa1ca2b76810575b66fde63f4eeed58d36a51f807d47605a7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:20:37.540740 containerd[2507]: time="2026-01-14T01:20:37.540713796Z" level=info msg="CreateContainer within sandbox \"d96a9532c48bd79e9faabd68500a896a3df059a04d26a7e7b64072ad2a860902\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:20:37.549326 containerd[2507]: time="2026-01-14T01:20:37.549297514Z" level=info msg="Container 6a9aead9fc6153aaad44f0fc2c5d32beda64e01ce699c1427c6c6acf3c4483db: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:37.576749 containerd[2507]: time="2026-01-14T01:20:37.576719977Z" level=info msg="Container 890584c06829faacd64393203b6d11593af7859f7b696333add533de5d55cbf7: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:37.591572 containerd[2507]: time="2026-01-14T01:20:37.591549089Z" level=info msg="CreateContainer within sandbox \"bc26bd53969cc52944eeb39cdde5365a748db4d05a44e773004792e401b5129e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a9aead9fc6153aaad44f0fc2c5d32beda64e01ce699c1427c6c6acf3c4483db\"" Jan 14 01:20:37.591992 containerd[2507]: time="2026-01-14T01:20:37.591972396Z" level=info msg="StartContainer for \"6a9aead9fc6153aaad44f0fc2c5d32beda64e01ce699c1427c6c6acf3c4483db\"" Jan 14 01:20:37.592671 containerd[2507]: time="2026-01-14T01:20:37.592643761Z" level=info msg="connecting to shim 6a9aead9fc6153aaad44f0fc2c5d32beda64e01ce699c1427c6c6acf3c4483db" address="unix:///run/containerd/s/c7fa30ebb897ce65559dafe8993101f7bb7ee7e336f8e2eb60c3c13654937b69" protocol=ttrpc version=3 Jan 14 01:20:37.602742 containerd[2507]: time="2026-01-14T01:20:37.602641192Z" level=info msg="Container e1c2c1cc460c29a1baf49d1df06c9df98edb89c567f87cce973443cf72008287: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:37.604785 containerd[2507]: time="2026-01-14T01:20:37.604763383Z" level=info msg="CreateContainer within sandbox \"315780d3d7d125fa1ca2b76810575b66fde63f4eeed58d36a51f807d47605a7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"890584c06829faacd64393203b6d11593af7859f7b696333add533de5d55cbf7\"" Jan 14 01:20:37.605178 containerd[2507]: time="2026-01-14T01:20:37.605159980Z" level=info msg="StartContainer for \"890584c06829faacd64393203b6d11593af7859f7b696333add533de5d55cbf7\"" Jan 14 01:20:37.606236 containerd[2507]: time="2026-01-14T01:20:37.606210506Z" level=info msg="connecting to shim 890584c06829faacd64393203b6d11593af7859f7b696333add533de5d55cbf7" address="unix:///run/containerd/s/c6f6d757186cb68dc886800f0cb1e754441d9d016745f7fb2ca44050b6471a91" protocol=ttrpc version=3 Jan 14 01:20:37.606786 systemd[1]: Started cri-containerd-6a9aead9fc6153aaad44f0fc2c5d32beda64e01ce699c1427c6c6acf3c4483db.scope - libcontainer container 6a9aead9fc6153aaad44f0fc2c5d32beda64e01ce699c1427c6c6acf3c4483db. 
Jan 14 01:20:37.621664 containerd[2507]: time="2026-01-14T01:20:37.621637979Z" level=info msg="CreateContainer within sandbox \"d96a9532c48bd79e9faabd68500a896a3df059a04d26a7e7b64072ad2a860902\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e1c2c1cc460c29a1baf49d1df06c9df98edb89c567f87cce973443cf72008287\"" Jan 14 01:20:37.622738 containerd[2507]: time="2026-01-14T01:20:37.621970638Z" level=info msg="StartContainer for \"e1c2c1cc460c29a1baf49d1df06c9df98edb89c567f87cce973443cf72008287\"" Jan 14 01:20:37.622817 containerd[2507]: time="2026-01-14T01:20:37.622733460Z" level=info msg="connecting to shim e1c2c1cc460c29a1baf49d1df06c9df98edb89c567f87cce973443cf72008287" address="unix:///run/containerd/s/21e6fbe13ff3e49e5127670fe70047236e91b8b1785a543a3c35748cd2109dc4" protocol=ttrpc version=3 Jan 14 01:20:37.624691 systemd[1]: Started cri-containerd-890584c06829faacd64393203b6d11593af7859f7b696333add533de5d55cbf7.scope - libcontainer container 890584c06829faacd64393203b6d11593af7859f7b696333add533de5d55cbf7. Jan 14 01:20:37.628000 audit: BPF prog-id=122 op=LOAD Jan 14 01:20:37.628000 audit: BPF prog-id=123 op=LOAD Jan 14 01:20:37.628000 audit[3799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3664 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396165616439666336313533616161643434663066633263356433 Jan 14 01:20:37.628000 audit: BPF prog-id=123 op=UNLOAD Jan 14 01:20:37.628000 audit[3799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3664 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396165616439666336313533616161643434663066633263356433 Jan 14 01:20:37.628000 audit: BPF prog-id=124 op=LOAD Jan 14 01:20:37.628000 audit[3799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3664 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396165616439666336313533616161643434663066633263356433 Jan 14 01:20:37.628000 audit: BPF prog-id=125 op=LOAD Jan 14 01:20:37.628000 audit[3799]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3664 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.628000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396165616439666336313533616161643434663066633263356433 Jan 14 01:20:37.628000 audit: BPF prog-id=125 op=UNLOAD Jan 14 01:20:37.628000 audit[3799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3664 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396165616439666336313533616161643434663066633263356433 Jan 14 01:20:37.628000 audit: BPF prog-id=124 op=UNLOAD Jan 14 01:20:37.628000 audit[3799]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3664 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396165616439666336313533616161643434663066633263356433 Jan 14 01:20:37.628000 audit: BPF prog-id=126 op=LOAD Jan 14 01:20:37.628000 audit[3799]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3664 pid=3799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396165616439666336313533616161643434663066633263356433 Jan 14 01:20:37.647641 systemd[1]: Started cri-containerd-e1c2c1cc460c29a1baf49d1df06c9df98edb89c567f87cce973443cf72008287.scope - libcontainer container e1c2c1cc460c29a1baf49d1df06c9df98edb89c567f87cce973443cf72008287. 
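The audit records in this stretch repeat a fixed pattern for each runc invocation: a BPF prog-id LOAD/UNLOAD event, a matching SYSCALL record (arch=c000003e is x86_64, where syscall=321 is bpf(2) and syscall=3 is close(2)) issued by runc while it sets up the new container, and a PROCTITLE record carrying the runc command line hex-encoded with NUL separators between arguments. As a reading aid only (not part of the captured log, assumes Python 3), a small helper along these lines decodes the PROCTITLE payloads:

```python
# Reading aid for the audit PROCTITLE records above (assumption: Python 3).
# The proctitle value is the process argv, hex-encoded, with NUL bytes
# separating individual arguments.
import re
import sys

PROCTITLE_RE = re.compile(r"proctitle=([0-9A-Fa-f]+)")

def decode_proctitle(hex_payload: str) -> str:
    """Turn '72756E6300...' into 'runc --root /run/containerd/runc/k8s.io ...'."""
    raw = bytes.fromhex(hex_payload)
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    for line in sys.stdin:
        match = PROCTITLE_RE.search(line)
        if match:
            print(decode_proctitle(match.group(1)))
```

Fed the entries above, it prints lines such as `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/6a9aead9fc6153aaad44f0fc2c5d3` (the container ID is cut short in the audit record itself), which ties each audit burst back to the container IDs containerd reports when starting the shims.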
Jan 14 01:20:37.649000 audit: BPF prog-id=127 op=LOAD Jan 14 01:20:37.650000 audit: BPF prog-id=128 op=LOAD Jan 14 01:20:37.650000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3698 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303538346330363832396661616364363433393332303362366431 Jan 14 01:20:37.650000 audit: BPF prog-id=128 op=UNLOAD Jan 14 01:20:37.650000 audit[3812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3698 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303538346330363832396661616364363433393332303362366431 Jan 14 01:20:37.650000 audit: BPF prog-id=129 op=LOAD Jan 14 01:20:37.650000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3698 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303538346330363832396661616364363433393332303362366431 Jan 14 01:20:37.650000 audit: BPF prog-id=130 op=LOAD Jan 14 01:20:37.650000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3698 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303538346330363832396661616364363433393332303362366431 Jan 14 01:20:37.650000 audit: BPF prog-id=130 op=UNLOAD Jan 14 01:20:37.650000 audit[3812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3698 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303538346330363832396661616364363433393332303362366431 Jan 14 01:20:37.650000 audit: BPF prog-id=129 op=UNLOAD Jan 14 01:20:37.650000 audit[3812]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3698 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.650000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303538346330363832396661616364363433393332303362366431 Jan 14 01:20:37.651000 audit: BPF prog-id=131 op=LOAD Jan 14 01:20:37.651000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3698 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839303538346330363832396661616364363433393332303362366431 Jan 14 01:20:37.678000 audit: BPF prog-id=132 op=LOAD Jan 14 01:20:37.680000 audit: BPF prog-id=133 op=LOAD Jan 14 01:20:37.680000 audit[3830]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3726 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531633263316363343630633239613162616634396431646630366339 Jan 14 01:20:37.680000 audit: BPF prog-id=133 op=UNLOAD Jan 14 01:20:37.680000 audit[3830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3726 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531633263316363343630633239613162616634396431646630366339 Jan 14 01:20:37.681000 audit: BPF prog-id=134 op=LOAD Jan 14 01:20:37.681000 audit[3830]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3726 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531633263316363343630633239613162616634396431646630366339 Jan 14 01:20:37.681000 audit: BPF prog-id=135 op=LOAD Jan 14 01:20:37.681000 audit[3830]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3726 pid=3830 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531633263316363343630633239613162616634396431646630366339 Jan 14 01:20:37.681000 audit: BPF prog-id=135 op=UNLOAD Jan 14 01:20:37.681000 audit[3830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3726 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531633263316363343630633239613162616634396431646630366339 Jan 14 01:20:37.681000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:20:37.681000 audit[3830]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3726 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531633263316363343630633239613162616634396431646630366339 Jan 14 01:20:37.682000 audit: BPF prog-id=136 op=LOAD Jan 14 01:20:37.682000 audit[3830]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3726 pid=3830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:37.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531633263316363343630633239613162616634396431646630366339 Jan 14 01:20:37.689005 containerd[2507]: time="2026-01-14T01:20:37.688983060Z" level=info msg="StartContainer for \"6a9aead9fc6153aaad44f0fc2c5d32beda64e01ce699c1427c6c6acf3c4483db\" returns successfully" Jan 14 01:20:37.708677 containerd[2507]: time="2026-01-14T01:20:37.708652961Z" level=info msg="StartContainer for \"890584c06829faacd64393203b6d11593af7859f7b696333add533de5d55cbf7\" returns successfully" Jan 14 01:20:37.813354 containerd[2507]: time="2026-01-14T01:20:37.812412737Z" level=info msg="StartContainer for \"e1c2c1cc460c29a1baf49d1df06c9df98edb89c567f87cce973443cf72008287\" returns successfully" Jan 14 01:20:37.928948 kubelet[3617]: E0114 01:20:37.928930 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:37.932073 kubelet[3617]: E0114 01:20:37.932008 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:37.935089 kubelet[3617]: E0114 01:20:37.935070 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:38.939278 kubelet[3617]: E0114 01:20:38.939247 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:38.939667 kubelet[3617]: E0114 01:20:38.939649 3617 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:39.058508 kubelet[3617]: I0114 01:20:39.058332 3617 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.066208 kubelet[3617]: E0114 01:20:40.066160 3617 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4578.0.0-p-9807086b3c\" not found" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.131009 kubelet[3617]: I0114 01:20:40.130963 3617 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.171851 kubelet[3617]: I0114 01:20:40.171821 3617 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.180142 kubelet[3617]: E0114 01:20:40.180110 3617 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.180374 kubelet[3617]: I0114 01:20:40.180268 3617 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.182157 kubelet[3617]: E0114 01:20:40.182134 3617 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.182250 kubelet[3617]: I0114 01:20:40.182234 3617 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.183295 kubelet[3617]: E0114 01:20:40.183270 3617 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578.0.0-p-9807086b3c\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:40.859808 kubelet[3617]: I0114 01:20:40.859778 3617 apiserver.go:52] "Watching apiserver" Jan 14 01:20:40.878637 kubelet[3617]: I0114 01:20:40.878606 3617 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 01:20:42.009090 kubelet[3617]: I0114 01:20:42.009065 3617 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:42.020144 kubelet[3617]: I0114 01:20:42.020107 3617 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:20:42.457234 systemd[1]: Reload requested from 
client PID 3902 ('systemctl') (unit session-10.scope)... Jan 14 01:20:42.457247 systemd[1]: Reloading... Jan 14 01:20:42.527530 zram_generator::config[3952]: No configuration found. Jan 14 01:20:42.715538 systemd[1]: Reloading finished in 258 ms. Jan 14 01:20:42.739402 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:42.750268 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:20:42.750511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:42.759299 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 01:20:42.759361 kernel: audit: type=1131 audit(1768353642.750:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:42.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:42.750563 systemd[1]: kubelet.service: Consumed 631ms CPU time, 124.7M memory peak. Jan 14 01:20:42.753697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:20:42.752000 audit: BPF prog-id=137 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=138 op=LOAD Jan 14 01:20:42.764106 kernel: audit: type=1334 audit(1768353642.752:430): prog-id=137 op=LOAD Jan 14 01:20:42.764198 kernel: audit: type=1334 audit(1768353642.752:431): prog-id=138 op=LOAD Jan 14 01:20:42.766177 kernel: audit: type=1334 audit(1768353642.752:432): prog-id=95 op=UNLOAD Jan 14 01:20:42.752000 audit: BPF prog-id=95 op=UNLOAD Jan 14 01:20:42.768552 kernel: audit: type=1334 audit(1768353642.752:433): prog-id=96 op=UNLOAD Jan 14 01:20:42.752000 audit: BPF prog-id=96 op=UNLOAD Jan 14 01:20:42.770761 kernel: audit: type=1334 audit(1768353642.752:434): prog-id=139 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=139 op=LOAD Jan 14 01:20:42.772830 kernel: audit: type=1334 audit(1768353642.752:435): prog-id=88 op=UNLOAD Jan 14 01:20:42.752000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:20:42.774786 kernel: audit: type=1334 audit(1768353642.752:436): prog-id=140 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=140 op=LOAD Jan 14 01:20:42.776731 kernel: audit: type=1334 audit(1768353642.752:437): prog-id=141 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=141 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:20:42.778213 kernel: audit: type=1334 audit(1768353642.752:438): prog-id=89 op=UNLOAD Jan 14 01:20:42.752000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:20:42.752000 audit: BPF prog-id=142 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:20:42.752000 audit: BPF prog-id=143 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=144 op=LOAD Jan 14 01:20:42.752000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:20:42.752000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:20:42.757000 audit: BPF prog-id=145 op=LOAD Jan 14 01:20:42.757000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:20:42.757000 audit: BPF prog-id=146 op=LOAD Jan 14 01:20:42.757000 audit: BPF prog-id=147 op=LOAD Jan 14 01:20:42.757000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:20:42.757000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:20:42.759000 audit: BPF prog-id=148 op=LOAD Jan 14 01:20:42.765000 audit: BPF prog-id=106 op=UNLOAD Jan 14 01:20:42.766000 audit: BPF prog-id=149 op=LOAD Jan 14 01:20:42.766000 audit: BPF prog-id=103 
op=UNLOAD Jan 14 01:20:42.766000 audit: BPF prog-id=150 op=LOAD Jan 14 01:20:42.766000 audit: BPF prog-id=151 op=LOAD Jan 14 01:20:42.766000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:20:42.766000 audit: BPF prog-id=105 op=UNLOAD Jan 14 01:20:42.767000 audit: BPF prog-id=152 op=LOAD Jan 14 01:20:42.767000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:20:42.767000 audit: BPF prog-id=153 op=LOAD Jan 14 01:20:42.767000 audit: BPF prog-id=100 op=UNLOAD Jan 14 01:20:42.767000 audit: BPF prog-id=154 op=LOAD Jan 14 01:20:42.768000 audit: BPF prog-id=155 op=LOAD Jan 14 01:20:42.768000 audit: BPF prog-id=101 op=UNLOAD Jan 14 01:20:42.768000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:20:42.769000 audit: BPF prog-id=156 op=LOAD Jan 14 01:20:42.769000 audit: BPF prog-id=91 op=UNLOAD Jan 14 01:20:44.634566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:20:44.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:44.641745 (kubelet)[4019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:20:44.679505 kubelet[4019]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:20:44.679505 kubelet[4019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:20:44.679505 kubelet[4019]: I0114 01:20:44.678683 4019 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:20:44.684133 kubelet[4019]: I0114 01:20:44.684104 4019 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 01:20:44.684133 kubelet[4019]: I0114 01:20:44.684122 4019 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:20:44.684247 kubelet[4019]: I0114 01:20:44.684146 4019 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 01:20:44.684247 kubelet[4019]: I0114 01:20:44.684152 4019 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:20:44.684344 kubelet[4019]: I0114 01:20:44.684331 4019 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:20:44.685191 kubelet[4019]: I0114 01:20:44.685176 4019 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:20:44.687011 kubelet[4019]: I0114 01:20:44.686979 4019 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:20:44.691688 kubelet[4019]: I0114 01:20:44.691654 4019 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:20:44.693650 kubelet[4019]: I0114 01:20:44.693633 4019 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 01:20:44.694361 kubelet[4019]: I0114 01:20:44.694315 4019 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:20:44.694665 kubelet[4019]: I0114 01:20:44.694357 4019 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578.0.0-p-9807086b3c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:20:44.694665 kubelet[4019]: I0114 01:20:44.694653 4019 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:20:44.694665 kubelet[4019]: I0114 01:20:44.694663 4019 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 01:20:44.694820 kubelet[4019]: I0114 01:20:44.694685 4019 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 01:20:44.695877 kubelet[4019]: I0114 01:20:44.695836 4019 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:20:44.695981 kubelet[4019]: I0114 01:20:44.695971 4019 kubelet.go:475] "Attempting to sync node with API server" Jan 14 01:20:44.696020 kubelet[4019]: I0114 01:20:44.695987 4019 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:20:44.696020 kubelet[4019]: I0114 01:20:44.696008 4019 kubelet.go:387] "Adding apiserver pod source" Jan 14 01:20:44.696065 kubelet[4019]: I0114 01:20:44.696026 4019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:20:44.700411 kubelet[4019]: I0114 01:20:44.700391 4019 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:20:44.708016 kubelet[4019]: I0114 01:20:44.707939 4019 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:20:44.708016 kubelet[4019]: I0114 01:20:44.707968 4019 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 
01:20:44.712927 kubelet[4019]: I0114 01:20:44.712909 4019 server.go:1262] "Started kubelet" Jan 14 01:20:44.714240 kubelet[4019]: I0114 01:20:44.714175 4019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:20:44.714835 kubelet[4019]: I0114 01:20:44.714812 4019 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:20:44.717466 kubelet[4019]: I0114 01:20:44.717420 4019 server.go:310] "Adding debug handlers to kubelet server" Jan 14 01:20:44.717863 kubelet[4019]: I0114 01:20:44.717804 4019 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:20:44.717863 kubelet[4019]: I0114 01:20:44.717836 4019 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 01:20:44.719252 kubelet[4019]: I0114 01:20:44.719227 4019 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:20:44.719441 kubelet[4019]: I0114 01:20:44.719429 4019 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:20:44.720667 kubelet[4019]: I0114 01:20:44.720653 4019 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 01:20:44.721525 kubelet[4019]: I0114 01:20:44.721463 4019 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 01:20:44.721643 kubelet[4019]: I0114 01:20:44.721596 4019 reconciler.go:29] "Reconciler: start to sync state" Jan 14 01:20:44.723710 kubelet[4019]: I0114 01:20:44.723692 4019 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:20:44.725432 kubelet[4019]: E0114 01:20:44.725408 4019 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:20:44.726452 kubelet[4019]: I0114 01:20:44.725904 4019 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:20:44.726452 kubelet[4019]: I0114 01:20:44.725914 4019 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:20:44.735010 kubelet[4019]: I0114 01:20:44.734973 4019 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 01:20:44.737627 kubelet[4019]: I0114 01:20:44.737614 4019 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:20:44.737936 kubelet[4019]: I0114 01:20:44.737922 4019 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 01:20:44.738006 kubelet[4019]: I0114 01:20:44.737948 4019 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 01:20:44.738049 kubelet[4019]: E0114 01:20:44.737996 4019 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:20:44.761680 kubelet[4019]: I0114 01:20:44.761666 4019 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:20:44.761770 kubelet[4019]: I0114 01:20:44.761761 4019 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:20:44.762115 kubelet[4019]: I0114 01:20:44.762099 4019 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:20:44.762253 kubelet[4019]: I0114 01:20:44.762234 4019 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:20:44.762298 kubelet[4019]: I0114 01:20:44.762255 4019 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:20:44.762298 kubelet[4019]: I0114 01:20:44.762272 4019 policy_none.go:49] "None policy: Start" Jan 14 01:20:44.762298 kubelet[4019]: I0114 01:20:44.762282 4019 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 01:20:44.762298 kubelet[4019]: I0114 01:20:44.762292 4019 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 01:20:44.762386 kubelet[4019]: I0114 01:20:44.762373 4019 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 14 01:20:44.762386 kubelet[4019]: I0114 01:20:44.762379 4019 policy_none.go:47] "Start" Jan 14 01:20:44.765351 kubelet[4019]: E0114 01:20:44.765325 4019 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:20:44.765472 kubelet[4019]: I0114 01:20:44.765453 4019 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:20:44.765528 kubelet[4019]: I0114 01:20:44.765478 4019 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:20:44.765847 kubelet[4019]: I0114 01:20:44.765829 4019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:20:44.769116 kubelet[4019]: E0114 01:20:44.767973 4019 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 01:20:44.838910 kubelet[4019]: I0114 01:20:44.838877 4019 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:44.839086 kubelet[4019]: I0114 01:20:44.839075 4019 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:44.839156 kubelet[4019]: I0114 01:20:44.838968 4019 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:44.872848 kubelet[4019]: I0114 01:20:44.872832 4019 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:44.883664 kubelet[4019]: I0114 01:20:44.882819 4019 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:20:44.884630 kubelet[4019]: I0114 01:20:44.884610 4019 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:20:44.885093 kubelet[4019]: I0114 01:20:44.885005 4019 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:20:44.885093 kubelet[4019]: E0114 01:20:44.885064 4019 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" already exists" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:44.931214 kubelet[4019]: I0114 01:20:44.931022 4019 kubelet_node_status.go:124] "Node was previously registered" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:44.931214 kubelet[4019]: I0114 01:20:44.931068 4019 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.022911 kubelet[4019]: I0114 01:20:45.022893 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f18f777686a51c9628d11da799ae90ff-k8s-certs\") pod \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" (UID: \"f18f777686a51c9628d11da799ae90ff\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023021 kubelet[4019]: I0114 01:20:45.023007 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f18f777686a51c9628d11da799ae90ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" (UID: \"f18f777686a51c9628d11da799ae90ff\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023118 kubelet[4019]: I0114 01:20:45.023108 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-ca-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023199 kubelet[4019]: I0114 01:20:45.023191 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-k8s-certs\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023280 kubelet[4019]: I0114 01:20:45.023268 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0e46da19186e4716a371ed10c9203539-kubeconfig\") pod \"kube-scheduler-ci-4578.0.0-p-9807086b3c\" (UID: \"0e46da19186e4716a371ed10c9203539\") " pod="kube-system/kube-scheduler-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023364 kubelet[4019]: I0114 01:20:45.023355 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023435 kubelet[4019]: I0114 01:20:45.023428 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-kubeconfig\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023546 kubelet[4019]: I0114 01:20:45.023498 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c710b6d3d992c9c1884dfc386dd6b4cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" (UID: \"c710b6d3d992c9c1884dfc386dd6b4cb\") " pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.023546 kubelet[4019]: I0114 01:20:45.023523 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f18f777686a51c9628d11da799ae90ff-ca-certs\") pod \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" (UID: \"f18f777686a51c9628d11da799ae90ff\") " pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.699100 kubelet[4019]: I0114 01:20:45.699069 4019 apiserver.go:52] "Watching apiserver" Jan 14 01:20:45.722362 kubelet[4019]: I0114 01:20:45.722334 4019 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 01:20:45.755912 kubelet[4019]: I0114 01:20:45.755877 4019 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.757515 kubelet[4019]: I0114 01:20:45.756276 4019 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.770394 kubelet[4019]: I0114 01:20:45.770347 4019 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:20:45.770629 kubelet[4019]: E0114 01:20:45.770557 4019 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578.0.0-p-9807086b3c\" already exists" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.770860 
kubelet[4019]: I0114 01:20:45.770845 4019 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 01:20:45.770946 kubelet[4019]: E0114 01:20:45.770937 4019 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578.0.0-p-9807086b3c\" already exists" pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" Jan 14 01:20:45.782251 kubelet[4019]: I0114 01:20:45.782187 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4578.0.0-p-9807086b3c" podStartSLOduration=1.7821769870000002 podStartE2EDuration="1.782176987s" podCreationTimestamp="2026-01-14 01:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:20:45.782002738 +0000 UTC m=+1.137043116" watchObservedRunningTime="2026-01-14 01:20:45.782176987 +0000 UTC m=+1.137217366" Jan 14 01:20:45.790975 kubelet[4019]: I0114 01:20:45.790935 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4578.0.0-p-9807086b3c" podStartSLOduration=3.790922832 podStartE2EDuration="3.790922832s" podCreationTimestamp="2026-01-14 01:20:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:20:45.790452511 +0000 UTC m=+1.145492885" watchObservedRunningTime="2026-01-14 01:20:45.790922832 +0000 UTC m=+1.145963205" Jan 14 01:20:45.878137 kubelet[4019]: I0114 01:20:45.877943 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4578.0.0-p-9807086b3c" podStartSLOduration=1.8774805639999999 podStartE2EDuration="1.877480564s" podCreationTimestamp="2026-01-14 01:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:20:45.876715489 +0000 UTC m=+1.231755867" watchObservedRunningTime="2026-01-14 01:20:45.877480564 +0000 UTC m=+1.232520934" Jan 14 01:20:48.156662 kubelet[4019]: I0114 01:20:48.156637 4019 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:20:48.157327 containerd[2507]: time="2026-01-14T01:20:48.157296453Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:20:48.157643 kubelet[4019]: I0114 01:20:48.157461 4019 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:20:49.398190 systemd[1]: Created slice kubepods-besteffort-podd18e4311_1df9_4d0b_b7f7_a85d6fd0f535.slice - libcontainer container kubepods-besteffort-podd18e4311_1df9_4d0b_b7f7_a85d6fd0f535.slice. 
Jan 14 01:20:49.449679 kubelet[4019]: I0114 01:20:49.449643 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d18e4311-1df9-4d0b-b7f7-a85d6fd0f535-lib-modules\") pod \"kube-proxy-jpcvz\" (UID: \"d18e4311-1df9-4d0b-b7f7-a85d6fd0f535\") " pod="kube-system/kube-proxy-jpcvz" Jan 14 01:20:49.449679 kubelet[4019]: I0114 01:20:49.449672 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d18e4311-1df9-4d0b-b7f7-a85d6fd0f535-kube-proxy\") pod \"kube-proxy-jpcvz\" (UID: \"d18e4311-1df9-4d0b-b7f7-a85d6fd0f535\") " pod="kube-system/kube-proxy-jpcvz" Jan 14 01:20:49.450000 kubelet[4019]: I0114 01:20:49.449690 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwlp\" (UniqueName: \"kubernetes.io/projected/d18e4311-1df9-4d0b-b7f7-a85d6fd0f535-kube-api-access-7bwlp\") pod \"kube-proxy-jpcvz\" (UID: \"d18e4311-1df9-4d0b-b7f7-a85d6fd0f535\") " pod="kube-system/kube-proxy-jpcvz" Jan 14 01:20:49.450000 kubelet[4019]: I0114 01:20:49.449708 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d18e4311-1df9-4d0b-b7f7-a85d6fd0f535-xtables-lock\") pod \"kube-proxy-jpcvz\" (UID: \"d18e4311-1df9-4d0b-b7f7-a85d6fd0f535\") " pod="kube-system/kube-proxy-jpcvz" Jan 14 01:20:49.561457 systemd[1]: Created slice kubepods-besteffort-podef3e7d7c_6294_48e4_823e_22cb8561f38f.slice - libcontainer container kubepods-besteffort-podef3e7d7c_6294_48e4_823e_22cb8561f38f.slice. Jan 14 01:20:49.650735 kubelet[4019]: I0114 01:20:49.650658 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef3e7d7c-6294-48e4-823e-22cb8561f38f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-6gcvr\" (UID: \"ef3e7d7c-6294-48e4-823e-22cb8561f38f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-6gcvr" Jan 14 01:20:49.650735 kubelet[4019]: I0114 01:20:49.650689 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n4l9\" (UniqueName: \"kubernetes.io/projected/ef3e7d7c-6294-48e4-823e-22cb8561f38f-kube-api-access-7n4l9\") pod \"tigera-operator-65cdcdfd6d-6gcvr\" (UID: \"ef3e7d7c-6294-48e4-823e-22cb8561f38f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-6gcvr" Jan 14 01:20:49.717142 containerd[2507]: time="2026-01-14T01:20:49.716962486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpcvz,Uid:d18e4311-1df9-4d0b-b7f7-a85d6fd0f535,Namespace:kube-system,Attempt:0,}" Jan 14 01:20:49.755586 containerd[2507]: time="2026-01-14T01:20:49.755104488Z" level=info msg="connecting to shim 3d7d9ce9a8c99e1f5608e67eef26deef4cb26c0b7ae9692b25d113c44bab36c7" address="unix:///run/containerd/s/ea230d7d9869bd2ac674c6db1eb70cae7d0347adb0b00f66fb6640961d9bb7ca" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:49.780660 systemd[1]: Started cri-containerd-3d7d9ce9a8c99e1f5608e67eef26deef4cb26c0b7ae9692b25d113c44bab36c7.scope - libcontainer container 3d7d9ce9a8c99e1f5608e67eef26deef4cb26c0b7ae9692b25d113c44bab36c7. 
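One detail worth noting in the two "Created slice" entries above: with cgroupDriver="systemd" (reported earlier by the kubelet), the slice name is derived from the pod's QoS class and UID, with the dashes in the UID mapped to underscores; the UIDs themselves are visible in the volume reconciler entries for kube-proxy-jpcvz and tigera-operator-65cdcdfd6d-6gcvr. A minimal sketch of that naming rule, checked against the values in this log (the helper name is illustrative, not a kubelet API):

```python
# Reading aid: reproduce the systemd slice names logged above from the pod UIDs
# shown in the volume reconciler entries (both are BestEffort QoS pods here).
def besteffort_pod_slice(pod_uid: str) -> str:
    # systemd cgroup naming used by the kubelet: dashes in the UID become underscores
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

assert besteffort_pod_slice("d18e4311-1df9-4d0b-b7f7-a85d6fd0f535") == \
    "kubepods-besteffort-podd18e4311_1df9_4d0b_b7f7_a85d6fd0f535.slice"   # kube-proxy-jpcvz
assert besteffort_pod_slice("ef3e7d7c-6294-48e4-823e-22cb8561f38f") == \
    "kubepods-besteffort-podef3e7d7c_6294_48e4_823e_22cb8561f38f.slice"   # tigera-operator-65cdcdfd6d-6gcvr
```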
Jan 14 01:20:49.785000 audit: BPF prog-id=157 op=LOAD Jan 14 01:20:49.788563 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:20:49.788599 kernel: audit: type=1334 audit(1768353649.785:471): prog-id=157 op=LOAD Jan 14 01:20:49.785000 audit: BPF prog-id=158 op=LOAD Jan 14 01:20:49.790957 kernel: audit: type=1334 audit(1768353649.785:472): prog-id=158 op=LOAD Jan 14 01:20:49.785000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.795789 kernel: audit: type=1300 audit(1768353649.785:472): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.801072 kernel: audit: type=1327 audit(1768353649.785:472): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.785000 audit: BPF prog-id=158 op=UNLOAD Jan 14 01:20:49.802902 kernel: audit: type=1334 audit(1768353649.785:473): prog-id=158 op=UNLOAD Jan 14 01:20:49.785000 audit[4087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.807876 kernel: audit: type=1300 audit(1768353649.785:473): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.812544 kernel: audit: type=1327 audit(1768353649.785:473): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.785000 audit: BPF prog-id=159 op=LOAD Jan 14 01:20:49.814510 kernel: audit: type=1334 audit(1768353649.785:474): prog-id=159 op=LOAD Jan 14 01:20:49.785000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.818369 kernel: audit: type=1300 audit(1768353649.785:474): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.821551 containerd[2507]: time="2026-01-14T01:20:49.821510369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpcvz,Uid:d18e4311-1df9-4d0b-b7f7-a85d6fd0f535,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d7d9ce9a8c99e1f5608e67eef26deef4cb26c0b7ae9692b25d113c44bab36c7\"" Jan 14 01:20:49.823701 kernel: audit: type=1327 audit(1768353649.785:474): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.785000 audit: BPF prog-id=160 op=LOAD Jan 14 01:20:49.785000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.785000 audit: BPF prog-id=160 op=UNLOAD Jan 14 01:20:49.785000 audit[4087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.785000 audit: BPF prog-id=159 op=UNLOAD Jan 14 01:20:49.785000 audit[4087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.785000 audit: BPF prog-id=161 op=LOAD Jan 14 01:20:49.785000 audit[4087]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4074 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3364376439636539613863393965316635363038653637656566323664 Jan 14 01:20:49.831042 containerd[2507]: time="2026-01-14T01:20:49.831007492Z" level=info msg="CreateContainer within sandbox \"3d7d9ce9a8c99e1f5608e67eef26deef4cb26c0b7ae9692b25d113c44bab36c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:20:49.858886 containerd[2507]: time="2026-01-14T01:20:49.858710769Z" level=info msg="Container e104ab5ec8f67bba353d5c71e5f7935a2bf76621e0ed4d8ea5bb15ea6508c364: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:49.876762 containerd[2507]: time="2026-01-14T01:20:49.876739041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-6gcvr,Uid:ef3e7d7c-6294-48e4-823e-22cb8561f38f,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:20:49.887579 containerd[2507]: time="2026-01-14T01:20:49.887470945Z" level=info msg="CreateContainer within sandbox \"3d7d9ce9a8c99e1f5608e67eef26deef4cb26c0b7ae9692b25d113c44bab36c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e104ab5ec8f67bba353d5c71e5f7935a2bf76621e0ed4d8ea5bb15ea6508c364\"" Jan 14 01:20:49.889830 containerd[2507]: time="2026-01-14T01:20:49.889789915Z" level=info msg="StartContainer for \"e104ab5ec8f67bba353d5c71e5f7935a2bf76621e0ed4d8ea5bb15ea6508c364\"" Jan 14 01:20:49.892374 containerd[2507]: time="2026-01-14T01:20:49.892336878Z" level=info msg="connecting to shim e104ab5ec8f67bba353d5c71e5f7935a2bf76621e0ed4d8ea5bb15ea6508c364" address="unix:///run/containerd/s/ea230d7d9869bd2ac674c6db1eb70cae7d0347adb0b00f66fb6640961d9bb7ca" protocol=ttrpc version=3 Jan 14 01:20:49.910625 systemd[1]: Started cri-containerd-e104ab5ec8f67bba353d5c71e5f7935a2bf76621e0ed4d8ea5bb15ea6508c364.scope - libcontainer container e104ab5ec8f67bba353d5c71e5f7935a2bf76621e0ed4d8ea5bb15ea6508c364. 
Jan 14 01:20:49.931636 containerd[2507]: time="2026-01-14T01:20:49.931605896Z" level=info msg="connecting to shim 7c4cd301bcaaa3350ed5cd41e5932a2b92fc5a7dc57ca823a37a79bcd1c6dea1" address="unix:///run/containerd/s/baebaf6842e1468d51a089827b2f60b13009314c351f9db1da187b3fff4f6906" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:20:49.937000 audit: BPF prog-id=162 op=LOAD Jan 14 01:20:49.937000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4074 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303461623565633866363762626133353364356337316535663739 Jan 14 01:20:49.937000 audit: BPF prog-id=163 op=LOAD Jan 14 01:20:49.937000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4074 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303461623565633866363762626133353364356337316535663739 Jan 14 01:20:49.937000 audit: BPF prog-id=163 op=UNLOAD Jan 14 01:20:49.937000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303461623565633866363762626133353364356337316535663739 Jan 14 01:20:49.937000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:20:49.937000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303461623565633866363762626133353364356337316535663739 Jan 14 01:20:49.937000 audit: BPF prog-id=164 op=LOAD Jan 14 01:20:49.937000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4074 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.937000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531303461623565633866363762626133353364356337316535663739 Jan 14 01:20:49.951780 systemd[1]: Started cri-containerd-7c4cd301bcaaa3350ed5cd41e5932a2b92fc5a7dc57ca823a37a79bcd1c6dea1.scope - libcontainer container 7c4cd301bcaaa3350ed5cd41e5932a2b92fc5a7dc57ca823a37a79bcd1c6dea1. Jan 14 01:20:49.961000 audit: BPF prog-id=165 op=LOAD Jan 14 01:20:49.962000 audit: BPF prog-id=166 op=LOAD Jan 14 01:20:49.962000 audit[4150]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4140 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346364333031626361616133333530656435636434316535393332 Jan 14 01:20:49.962000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:20:49.962000 audit[4150]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4140 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346364333031626361616133333530656435636434316535393332 Jan 14 01:20:49.962000 audit: BPF prog-id=167 op=LOAD Jan 14 01:20:49.962000 audit[4150]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4140 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346364333031626361616133333530656435636434316535393332 Jan 14 01:20:49.962000 audit: BPF prog-id=168 op=LOAD Jan 14 01:20:49.962000 audit[4150]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4140 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346364333031626361616133333530656435636434316535393332 Jan 14 01:20:49.962000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:20:49.962000 audit[4150]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4140 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346364333031626361616133333530656435636434316535393332 Jan 14 01:20:49.962000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:20:49.962000 audit[4150]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4140 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346364333031626361616133333530656435636434316535393332 Jan 14 01:20:49.962000 audit: BPF prog-id=169 op=LOAD Jan 14 01:20:49.962000 audit[4150]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4140 pid=4150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:49.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3763346364333031626361616133333530656435636434316535393332 Jan 14 01:20:49.966571 containerd[2507]: time="2026-01-14T01:20:49.966544508Z" level=info msg="StartContainer for \"e104ab5ec8f67bba353d5c71e5f7935a2bf76621e0ed4d8ea5bb15ea6508c364\" returns successfully" Jan 14 01:20:50.006580 containerd[2507]: time="2026-01-14T01:20:50.006545894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-6gcvr,Uid:ef3e7d7c-6294-48e4-823e-22cb8561f38f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7c4cd301bcaaa3350ed5cd41e5932a2b92fc5a7dc57ca823a37a79bcd1c6dea1\"" Jan 14 01:20:50.008632 containerd[2507]: time="2026-01-14T01:20:50.008567115Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:20:50.150000 audit[4223]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.151000 audit[4224]: NETFILTER_CFG table=mangle:58 family=10 entries=1 op=nft_register_chain pid=4224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.150000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea4512490 a2=0 a3=7ffea451247c items=0 ppid=4124 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:20:50.151000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc314fd6c0 a2=0 a3=7ffc314fd6ac items=0 ppid=4124 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:20:50.156000 audit[4228]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_chain pid=4228 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.157000 audit[4229]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=4229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.156000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd09134040 a2=0 a3=7ffd0913402c items=0 ppid=4124 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:20:50.157000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd287a2140 a2=0 a3=7ffd287a212c items=0 ppid=4124 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:20:50.160000 audit[4230]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=4230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.160000 audit[4230]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca36c2ac0 a2=0 a3=7ffca36c2aac items=0 ppid=4124 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.160000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:20:50.160000 audit[4231]: NETFILTER_CFG table=filter:62 family=10 entries=1 op=nft_register_chain pid=4231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.160000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc305642e0 a2=0 a3=7ffc305642cc items=0 ppid=4124 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:20:50.253000 audit[4232]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.253000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc98ed6c80 a2=0 a3=7ffc98ed6c6c items=0 ppid=4124 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:20:50.256000 audit[4234]: 
NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4234 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.256000 audit[4234]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffd0bf8230 a2=0 a3=7fffd0bf821c items=0 ppid=4124 pid=4234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 14 01:20:50.259000 audit[4237]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4237 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.259000 audit[4237]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe1598a240 a2=0 a3=7ffe1598a22c items=0 ppid=4124 pid=4237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 01:20:50.260000 audit[4238]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.260000 audit[4238]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce48e6680 a2=0 a3=7ffce48e666c items=0 ppid=4124 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:20:50.263000 audit[4240]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4240 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.263000 audit[4240]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0718e790 a2=0 a3=7ffd0718e77c items=0 ppid=4124 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:20:50.264000 audit[4241]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.264000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe504abd40 a2=0 a3=7ffe504abd2c items=0 ppid=4124 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:20:50.266000 audit[4243]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.266000 audit[4243]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcc93a02f0 a2=0 a3=7ffcc93a02dc items=0 ppid=4124 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.269000 audit[4246]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4246 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.269000 audit[4246]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdfb85c0e0 a2=0 a3=7ffdfb85c0cc items=0 ppid=4124 pid=4246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.270000 audit[4247]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.270000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe73495b90 a2=0 a3=7ffe73495b7c items=0 ppid=4124 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:20:50.272000 audit[4249]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4249 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.272000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc4d64a00 a2=0 a3=7fffc4d649ec items=0 ppid=4124 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.272000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:20:50.272000 audit[4250]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.272000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdb7687b0 a2=0 a3=7ffcdb76879c items=0 ppid=4124 
pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.272000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:20:50.275000 audit[4252]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.275000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc76493930 a2=0 a3=7ffc7649391c items=0 ppid=4124 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 14 01:20:50.278000 audit[4255]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.278000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf823e1d0 a2=0 a3=7ffdf823e1bc items=0 ppid=4124 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 01:20:50.280000 audit[4258]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.280000 audit[4258]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd8104add0 a2=0 a3=7ffd8104adbc items=0 ppid=4124 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 01:20:50.281000 audit[4259]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.281000 audit[4259]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc75b38b90 a2=0 a3=7ffc75b38b7c items=0 ppid=4124 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:20:50.283000 audit[4261]: NETFILTER_CFG table=nat:78 family=2 entries=1 
op=nft_register_rule pid=4261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.283000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdbe7f0190 a2=0 a3=7ffdbe7f017c items=0 ppid=4124 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.283000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.286000 audit[4264]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.286000 audit[4264]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc55add800 a2=0 a3=7ffc55add7ec items=0 ppid=4124 pid=4264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.286000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.287000 audit[4265]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.287000 audit[4265]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3cbf5060 a2=0 a3=7ffc3cbf504c items=0 ppid=4124 pid=4265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:20:50.289000 audit[4267]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:20:50.289000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd10391690 a2=0 a3=7ffd1039167c items=0 ppid=4124 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.289000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:20:50.428000 audit[4273]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:50.428000 audit[4273]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc38421c60 a2=0 a3=7ffc38421c4c items=0 ppid=4124 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.428000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 
01:20:50.435000 audit[4273]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4273 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:20:50.435000 audit[4273]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc38421c60 a2=0 a3=7ffc38421c4c items=0 ppid=4124 pid=4273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:50.436000 audit[4278]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.436000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd89b0ab50 a2=0 a3=7ffd89b0ab3c items=0 ppid=4124 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:20:50.438000 audit[4280]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.438000 audit[4280]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff327c91c0 a2=0 a3=7fff327c91ac items=0 ppid=4124 pid=4280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.438000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 01:20:50.442000 audit[4283]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4283 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.442000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe6465ab50 a2=0 a3=7ffe6465ab3c items=0 ppid=4124 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.442000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 14 01:20:50.443000 audit[4284]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.443000 audit[4284]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecd988860 a2=0 a3=7ffecd98884c items=0 ppid=4124 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.443000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:20:50.445000 audit[4286]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.445000 audit[4286]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffaf496d0 a2=0 a3=7ffffaf496bc items=0 ppid=4124 pid=4286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.445000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:20:50.446000 audit[4287]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.446000 audit[4287]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2ace31c0 a2=0 a3=7ffd2ace31ac items=0 ppid=4124 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.446000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:20:50.448000 audit[4289]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.448000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe645f39a0 a2=0 a3=7ffe645f398c items=0 ppid=4124 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.452000 audit[4292]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4292 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.452000 audit[4292]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff017007e0 a2=0 a3=7fff017007cc items=0 ppid=4124 pid=4292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.453000 audit[4293]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.453000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3895d800 a2=0 a3=7ffc3895d7ec items=0 ppid=4124 pid=4293 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:20:50.455000 audit[4295]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.455000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff7d25e7b0 a2=0 a3=7fff7d25e79c items=0 ppid=4124 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:20:50.456000 audit[4296]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4296 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.456000 audit[4296]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc21da1650 a2=0 a3=7ffc21da163c items=0 ppid=4124 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:20:50.458000 audit[4298]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.458000 audit[4298]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd4215d960 a2=0 a3=7ffd4215d94c items=0 ppid=4124 pid=4298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.458000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 01:20:50.461000 audit[4301]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4301 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.461000 audit[4301]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0863b240 a2=0 a3=7ffc0863b22c items=0 ppid=4124 pid=4301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.461000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 01:20:50.464000 audit[4304]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4304 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jan 14 01:20:50.464000 audit[4304]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe16641e40 a2=0 a3=7ffe16641e2c items=0 ppid=4124 pid=4304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 14 01:20:50.465000 audit[4305]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4305 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.465000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc41473d20 a2=0 a3=7ffc41473d0c items=0 ppid=4124 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.465000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:20:50.467000 audit[4307]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.467000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffebfd845e0 a2=0 a3=7ffebfd845cc items=0 ppid=4124 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.470000 audit[4310]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4310 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.470000 audit[4310]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde2d0fc10 a2=0 a3=7ffde2d0fbfc items=0 ppid=4124 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:20:50.471000 audit[4311]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4311 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.471000 audit[4311]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4cf82ca0 a2=0 a3=7ffe4cf82c8c items=0 ppid=4124 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:20:50.473000 audit[4313]: 
NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.473000 audit[4313]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffcae0d180 a2=0 a3=7fffcae0d16c items=0 ppid=4124 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:20:50.474000 audit[4314]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.474000 audit[4314]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff07f77800 a2=0 a3=7fff07f777ec items=0 ppid=4124 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:20:50.476000 audit[4316]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.476000 audit[4316]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea2d22aa0 a2=0 a3=7ffea2d22a8c items=0 ppid=4124 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:20:50.480000 audit[4319]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4319 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:20:50.480000 audit[4319]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeb5fc23a0 a2=0 a3=7ffeb5fc238c items=0 ppid=4124 pid=4319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.480000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:20:50.484000 audit[4321]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4321 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:20:50.484000 audit[4321]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff4da07ab0 a2=0 a3=7fff4da07a9c items=0 ppid=4124 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.484000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:50.484000 audit[4321]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4321 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:20:50.484000 audit[4321]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff4da07ab0 a2=0 a3=7fff4da07a9c items=0 ppid=4124 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:50.484000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:20:50.603269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892752850.mount: Deactivated successfully. Jan 14 01:20:51.960204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535797471.mount: Deactivated successfully. Jan 14 01:20:52.362309 containerd[2507]: time="2026-01-14T01:20:52.362265714Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:52.373407 containerd[2507]: time="2026-01-14T01:20:52.373294934Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558945" Jan 14 01:20:52.375891 containerd[2507]: time="2026-01-14T01:20:52.375861961Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:52.379223 containerd[2507]: time="2026-01-14T01:20:52.379181243Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:20:52.379606 containerd[2507]: time="2026-01-14T01:20:52.379477071Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.370651904s" Jan 14 01:20:52.379606 containerd[2507]: time="2026-01-14T01:20:52.379516123Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:20:52.396463 containerd[2507]: time="2026-01-14T01:20:52.396408911Z" level=info msg="CreateContainer within sandbox \"7c4cd301bcaaa3350ed5cd41e5932a2b92fc5a7dc57ca823a37a79bcd1c6dea1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:20:52.416226 containerd[2507]: time="2026-01-14T01:20:52.414587267Z" level=info msg="Container 5bb426567a4f73e6260850dafde1b714dfc3e2e66b56275b7a57b2b2b5c6d1d4: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:20:52.427665 containerd[2507]: time="2026-01-14T01:20:52.427643492Z" level=info msg="CreateContainer within sandbox \"7c4cd301bcaaa3350ed5cd41e5932a2b92fc5a7dc57ca823a37a79bcd1c6dea1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5bb426567a4f73e6260850dafde1b714dfc3e2e66b56275b7a57b2b2b5c6d1d4\"" Jan 14 01:20:52.428064 containerd[2507]: time="2026-01-14T01:20:52.428033286Z" level=info msg="StartContainer for \"5bb426567a4f73e6260850dafde1b714dfc3e2e66b56275b7a57b2b2b5c6d1d4\"" Jan 14 01:20:52.428821 containerd[2507]: time="2026-01-14T01:20:52.428798230Z" level=info msg="connecting to shim 
5bb426567a4f73e6260850dafde1b714dfc3e2e66b56275b7a57b2b2b5c6d1d4" address="unix:///run/containerd/s/baebaf6842e1468d51a089827b2f60b13009314c351f9db1da187b3fff4f6906" protocol=ttrpc version=3 Jan 14 01:20:52.446680 systemd[1]: Started cri-containerd-5bb426567a4f73e6260850dafde1b714dfc3e2e66b56275b7a57b2b2b5c6d1d4.scope - libcontainer container 5bb426567a4f73e6260850dafde1b714dfc3e2e66b56275b7a57b2b2b5c6d1d4. Jan 14 01:20:52.454000 audit: BPF prog-id=170 op=LOAD Jan 14 01:20:52.455000 audit: BPF prog-id=171 op=LOAD Jan 14 01:20:52.455000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4140 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562623432363536376134663733653632363038353064616664653162 Jan 14 01:20:52.455000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:20:52.455000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4140 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562623432363536376134663733653632363038353064616664653162 Jan 14 01:20:52.455000 audit: BPF prog-id=172 op=LOAD Jan 14 01:20:52.455000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4140 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562623432363536376134663733653632363038353064616664653162 Jan 14 01:20:52.455000 audit: BPF prog-id=173 op=LOAD Jan 14 01:20:52.455000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4140 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562623432363536376134663733653632363038353064616664653162 Jan 14 01:20:52.455000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:20:52.455000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4140 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:20:52.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562623432363536376134663733653632363038353064616664653162 Jan 14 01:20:52.455000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:20:52.455000 audit[4331]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4140 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562623432363536376134663733653632363038353064616664653162 Jan 14 01:20:52.455000 audit: BPF prog-id=174 op=LOAD Jan 14 01:20:52.455000 audit[4331]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4140 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:20:52.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562623432363536376134663733653632363038353064616664653162 Jan 14 01:20:52.477745 containerd[2507]: time="2026-01-14T01:20:52.477666112Z" level=info msg="StartContainer for \"5bb426567a4f73e6260850dafde1b714dfc3e2e66b56275b7a57b2b2b5c6d1d4\" returns successfully" Jan 14 01:20:52.790889 kubelet[4019]: I0114 01:20:52.790832 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-6gcvr" podStartSLOduration=1.418223157 podStartE2EDuration="3.790814713s" podCreationTimestamp="2026-01-14 01:20:49 +0000 UTC" firstStartedPulling="2026-01-14 01:20:50.007662713 +0000 UTC m=+5.362703079" lastFinishedPulling="2026-01-14 01:20:52.380254261 +0000 UTC m=+7.735294635" observedRunningTime="2026-01-14 01:20:52.790732036 +0000 UTC m=+8.145772415" watchObservedRunningTime="2026-01-14 01:20:52.790814713 +0000 UTC m=+8.145855086" Jan 14 01:20:52.791282 kubelet[4019]: I0114 01:20:52.791044 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jpcvz" podStartSLOduration=3.791035508 podStartE2EDuration="3.791035508s" podCreationTimestamp="2026-01-14 01:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:20:50.781228084 +0000 UTC m=+6.136268483" watchObservedRunningTime="2026-01-14 01:20:52.791035508 +0000 UTC m=+8.146075889" Jan 14 01:20:59.481267 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:20:59.481379 kernel: audit: type=1106 audit(1768353659.472:551): pid=2974 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:20:59.472000 audit[2974]: USER_END pid=2974 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:59.472651 sudo[2974]: pam_unix(sudo:session): session closed for user root Jan 14 01:20:59.472000 audit[2974]: CRED_DISP pid=2974 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:59.488516 kernel: audit: type=1104 audit(1768353659.472:552): pid=2974 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:20:59.582114 sshd[2973]: Connection closed by 10.200.16.10 port 42018 Jan 14 01:20:59.582566 sshd-session[2969]: pam_unix(sshd:session): session closed for user core Jan 14 01:20:59.582000 audit[2969]: USER_END pid=2969 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:59.585557 systemd-logind[2479]: Session 10 logged out. Waiting for processes to exit. Jan 14 01:20:59.587926 systemd[1]: sshd@6-10.200.4.7:22-10.200.16.10:42018.service: Deactivated successfully. Jan 14 01:20:59.590390 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:20:59.590804 systemd[1]: session-10.scope: Consumed 3.996s CPU time, 233.8M memory peak. Jan 14 01:20:59.591507 kernel: audit: type=1106 audit(1768353659.582:553): pid=2969 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:59.594400 systemd-logind[2479]: Removed session 10. Jan 14 01:20:59.583000 audit[2969]: CRED_DISP pid=2969 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:59.604509 kernel: audit: type=1104 audit(1768353659.583:554): pid=2969 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:20:59.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.7:22-10.200.16.10:42018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:20:59.615514 kernel: audit: type=1131 audit(1768353659.583:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.4.7:22-10.200.16.10:42018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:21:00.518000 audit[4414]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:00.525872 kernel: audit: type=1325 audit(1768353660.518:556): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:00.518000 audit[4414]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcbf54cea0 a2=0 a3=7ffcbf54ce8c items=0 ppid=4124 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:00.538160 kernel: audit: type=1300 audit(1768353660.518:556): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcbf54cea0 a2=0 a3=7ffcbf54ce8c items=0 ppid=4124 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:00.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:00.551069 kernel: audit: type=1327 audit(1768353660.518:556): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:00.551129 kernel: audit: type=1325 audit(1768353660.539:557): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:00.539000 audit[4414]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:00.539000 audit[4414]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbf54cea0 a2=0 a3=0 items=0 ppid=4124 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:00.561507 kernel: audit: type=1300 audit(1768353660.539:557): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcbf54cea0 a2=0 a3=0 items=0 ppid=4124 pid=4414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:00.539000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:01.571000 audit[4416]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:01.571000 audit[4416]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd042d78f0 a2=0 a3=7ffd042d78dc items=0 ppid=4124 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:01.571000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:01.577000 audit[4416]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:01.577000 
audit[4416]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd042d78f0 a2=0 a3=0 items=0 ppid=4124 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:01.577000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:02.590000 audit[4418]: NETFILTER_CFG table=filter:112 family=2 entries=18 op=nft_register_rule pid=4418 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:02.590000 audit[4418]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdaa7b6e40 a2=0 a3=7ffdaa7b6e2c items=0 ppid=4124 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:02.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:02.595000 audit[4418]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4418 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:02.595000 audit[4418]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdaa7b6e40 a2=0 a3=0 items=0 ppid=4124 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:02.595000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:03.950000 audit[4420]: NETFILTER_CFG table=filter:114 family=2 entries=21 op=nft_register_rule pid=4420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:03.950000 audit[4420]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcdb5709b0 a2=0 a3=7ffcdb57099c items=0 ppid=4124 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:03.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:03.954000 audit[4420]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4420 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:03.954000 audit[4420]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcdb5709b0 a2=0 a3=0 items=0 ppid=4124 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:03.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:04.003649 systemd[1]: Created slice kubepods-besteffort-pod370952c2_d049_420b_a34a_b8c72e368198.slice - libcontainer container kubepods-besteffort-pod370952c2_d049_420b_a34a_b8c72e368198.slice. 
Jan 14 01:21:04.044880 kubelet[4019]: I0114 01:21:04.044857 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cddl7\" (UniqueName: \"kubernetes.io/projected/370952c2-d049-420b-a34a-b8c72e368198-kube-api-access-cddl7\") pod \"calico-typha-74cb7cd7bd-t4s2r\" (UID: \"370952c2-d049-420b-a34a-b8c72e368198\") " pod="calico-system/calico-typha-74cb7cd7bd-t4s2r" Jan 14 01:21:04.045320 kubelet[4019]: I0114 01:21:04.045226 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/370952c2-d049-420b-a34a-b8c72e368198-typha-certs\") pod \"calico-typha-74cb7cd7bd-t4s2r\" (UID: \"370952c2-d049-420b-a34a-b8c72e368198\") " pod="calico-system/calico-typha-74cb7cd7bd-t4s2r" Jan 14 01:21:04.045320 kubelet[4019]: I0114 01:21:04.045290 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/370952c2-d049-420b-a34a-b8c72e368198-tigera-ca-bundle\") pod \"calico-typha-74cb7cd7bd-t4s2r\" (UID: \"370952c2-d049-420b-a34a-b8c72e368198\") " pod="calico-system/calico-typha-74cb7cd7bd-t4s2r" Jan 14 01:21:04.141608 systemd[1]: Created slice kubepods-besteffort-pod9a88546b_ae5f_453d_89ee_751ad052b7c6.slice - libcontainer container kubepods-besteffort-pod9a88546b_ae5f_453d_89ee_751ad052b7c6.slice. Jan 14 01:21:04.146245 kubelet[4019]: I0114 01:21:04.146225 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-xtables-lock\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146245 kubelet[4019]: I0114 01:21:04.146251 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-var-lib-calico\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146348 kubelet[4019]: I0114 01:21:04.146267 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ljx\" (UniqueName: \"kubernetes.io/projected/9a88546b-ae5f-453d-89ee-751ad052b7c6-kube-api-access-b8ljx\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146348 kubelet[4019]: I0114 01:21:04.146281 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-policysync\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146348 kubelet[4019]: I0114 01:21:04.146303 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-cni-net-dir\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146348 kubelet[4019]: I0114 01:21:04.146316 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" 
(UniqueName: \"kubernetes.io/secret/9a88546b-ae5f-453d-89ee-751ad052b7c6-node-certs\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146348 kubelet[4019]: I0114 01:21:04.146332 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a88546b-ae5f-453d-89ee-751ad052b7c6-tigera-ca-bundle\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146449 kubelet[4019]: I0114 01:21:04.146347 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-cni-bin-dir\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146449 kubelet[4019]: I0114 01:21:04.146363 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-cni-log-dir\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146449 kubelet[4019]: I0114 01:21:04.146389 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-flexvol-driver-host\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146449 kubelet[4019]: I0114 01:21:04.146407 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-lib-modules\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.146449 kubelet[4019]: I0114 01:21:04.146423 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9a88546b-ae5f-453d-89ee-751ad052b7c6-var-run-calico\") pod \"calico-node-d8gs4\" (UID: \"9a88546b-ae5f-453d-89ee-751ad052b7c6\") " pod="calico-system/calico-node-d8gs4" Jan 14 01:21:04.255023 kubelet[4019]: E0114 01:21:04.254772 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.255023 kubelet[4019]: W0114 01:21:04.254787 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.255023 kubelet[4019]: E0114 01:21:04.254824 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.258939 kubelet[4019]: E0114 01:21:04.258922 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.259045 kubelet[4019]: W0114 01:21:04.259031 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.259092 kubelet[4019]: E0114 01:21:04.259084 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.313296 containerd[2507]: time="2026-01-14T01:21:04.313039630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cb7cd7bd-t4s2r,Uid:370952c2-d049-420b-a34a-b8c72e368198,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:04.316620 kubelet[4019]: E0114 01:21:04.316470 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:04.339275 kubelet[4019]: E0114 01:21:04.339183 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.339275 kubelet[4019]: W0114 01:21:04.339206 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.339275 kubelet[4019]: E0114 01:21:04.339220 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.339393 kubelet[4019]: E0114 01:21:04.339340 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.339393 kubelet[4019]: W0114 01:21:04.339349 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.339393 kubelet[4019]: E0114 01:21:04.339359 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.339517 kubelet[4019]: E0114 01:21:04.339507 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.339517 kubelet[4019]: W0114 01:21:04.339515 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.339571 kubelet[4019]: E0114 01:21:04.339522 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.339682 kubelet[4019]: E0114 01:21:04.339673 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.339682 kubelet[4019]: W0114 01:21:04.339681 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.339730 kubelet[4019]: E0114 01:21:04.339688 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.339824 kubelet[4019]: E0114 01:21:04.339803 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.339887 kubelet[4019]: W0114 01:21:04.339876 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.339887 kubelet[4019]: E0114 01:21:04.339886 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.339997 kubelet[4019]: E0114 01:21:04.339990 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.340068 kubelet[4019]: W0114 01:21:04.340035 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.340068 kubelet[4019]: E0114 01:21:04.340044 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.340164 kubelet[4019]: E0114 01:21:04.340155 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.340164 kubelet[4019]: W0114 01:21:04.340162 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.340208 kubelet[4019]: E0114 01:21:04.340169 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.340264 kubelet[4019]: E0114 01:21:04.340260 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.340286 kubelet[4019]: W0114 01:21:04.340264 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.340286 kubelet[4019]: E0114 01:21:04.340270 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.340385 kubelet[4019]: E0114 01:21:04.340375 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.340385 kubelet[4019]: W0114 01:21:04.340383 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.340438 kubelet[4019]: E0114 01:21:04.340390 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.340553 kubelet[4019]: E0114 01:21:04.340543 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.340553 kubelet[4019]: W0114 01:21:04.340551 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341537 kubelet[4019]: E0114 01:21:04.340558 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341537 kubelet[4019]: E0114 01:21:04.340675 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341537 kubelet[4019]: W0114 01:21:04.340680 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341537 kubelet[4019]: E0114 01:21:04.340685 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341537 kubelet[4019]: E0114 01:21:04.340765 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341537 kubelet[4019]: W0114 01:21:04.340770 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341537 kubelet[4019]: E0114 01:21:04.340776 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341537 kubelet[4019]: E0114 01:21:04.340858 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341537 kubelet[4019]: W0114 01:21:04.340862 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341537 kubelet[4019]: E0114 01:21:04.340866 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.341758 kubelet[4019]: E0114 01:21:04.340941 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341758 kubelet[4019]: W0114 01:21:04.340946 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341758 kubelet[4019]: E0114 01:21:04.340950 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341758 kubelet[4019]: E0114 01:21:04.341034 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341758 kubelet[4019]: W0114 01:21:04.341038 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341758 kubelet[4019]: E0114 01:21:04.341042 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341758 kubelet[4019]: E0114 01:21:04.341121 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341758 kubelet[4019]: W0114 01:21:04.341125 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341758 kubelet[4019]: E0114 01:21:04.341129 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341758 kubelet[4019]: E0114 01:21:04.341213 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341964 kubelet[4019]: W0114 01:21:04.341216 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341964 kubelet[4019]: E0114 01:21:04.341220 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341964 kubelet[4019]: E0114 01:21:04.341305 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341964 kubelet[4019]: W0114 01:21:04.341309 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341964 kubelet[4019]: E0114 01:21:04.341313 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.341964 kubelet[4019]: E0114 01:21:04.341405 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341964 kubelet[4019]: W0114 01:21:04.341414 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.341964 kubelet[4019]: E0114 01:21:04.341419 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.341964 kubelet[4019]: E0114 01:21:04.341680 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.341964 kubelet[4019]: W0114 01:21:04.341687 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.342178 kubelet[4019]: E0114 01:21:04.341696 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.347446 kubelet[4019]: E0114 01:21:04.347430 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.347446 kubelet[4019]: W0114 01:21:04.347444 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.347605 kubelet[4019]: E0114 01:21:04.347455 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.347605 kubelet[4019]: I0114 01:21:04.347473 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19451c9d-d740-439e-ba98-ce86a4dce532-kubelet-dir\") pod \"csi-node-driver-c7dnf\" (UID: \"19451c9d-d740-439e-ba98-ce86a4dce532\") " pod="calico-system/csi-node-driver-c7dnf" Jan 14 01:21:04.347605 kubelet[4019]: E0114 01:21:04.347590 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.347605 kubelet[4019]: W0114 01:21:04.347596 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.347605 kubelet[4019]: E0114 01:21:04.347603 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.347759 kubelet[4019]: I0114 01:21:04.347617 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/19451c9d-d740-439e-ba98-ce86a4dce532-registration-dir\") pod \"csi-node-driver-c7dnf\" (UID: \"19451c9d-d740-439e-ba98-ce86a4dce532\") " pod="calico-system/csi-node-driver-c7dnf" Jan 14 01:21:04.347759 kubelet[4019]: E0114 01:21:04.347710 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.347759 kubelet[4019]: W0114 01:21:04.347716 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.347759 kubelet[4019]: E0114 01:21:04.347722 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.347759 kubelet[4019]: I0114 01:21:04.347733 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/19451c9d-d740-439e-ba98-ce86a4dce532-socket-dir\") pod \"csi-node-driver-c7dnf\" (UID: \"19451c9d-d740-439e-ba98-ce86a4dce532\") " pod="calico-system/csi-node-driver-c7dnf" Jan 14 01:21:04.347922 kubelet[4019]: E0114 01:21:04.347847 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.347922 kubelet[4019]: W0114 01:21:04.347852 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.347922 kubelet[4019]: E0114 01:21:04.347858 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.347922 kubelet[4019]: I0114 01:21:04.347871 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/19451c9d-d740-439e-ba98-ce86a4dce532-varrun\") pod \"csi-node-driver-c7dnf\" (UID: \"19451c9d-d740-439e-ba98-ce86a4dce532\") " pod="calico-system/csi-node-driver-c7dnf" Jan 14 01:21:04.348071 kubelet[4019]: E0114 01:21:04.347980 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348071 kubelet[4019]: W0114 01:21:04.347988 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348071 kubelet[4019]: E0114 01:21:04.347996 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.348154 kubelet[4019]: E0114 01:21:04.348124 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348154 kubelet[4019]: W0114 01:21:04.348130 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348154 kubelet[4019]: E0114 01:21:04.348137 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.348247 kubelet[4019]: E0114 01:21:04.348237 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348247 kubelet[4019]: W0114 01:21:04.348244 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348247 kubelet[4019]: E0114 01:21:04.348251 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.348358 kubelet[4019]: E0114 01:21:04.348350 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348358 kubelet[4019]: W0114 01:21:04.348357 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348438 kubelet[4019]: E0114 01:21:04.348363 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.348464 kubelet[4019]: E0114 01:21:04.348461 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348529 kubelet[4019]: W0114 01:21:04.348466 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348529 kubelet[4019]: E0114 01:21:04.348471 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.348641 kubelet[4019]: E0114 01:21:04.348582 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348641 kubelet[4019]: W0114 01:21:04.348587 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348641 kubelet[4019]: E0114 01:21:04.348593 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.348707 kubelet[4019]: E0114 01:21:04.348681 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348707 kubelet[4019]: W0114 01:21:04.348686 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348707 kubelet[4019]: E0114 01:21:04.348691 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.348764 kubelet[4019]: I0114 01:21:04.348713 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxx5\" (UniqueName: \"kubernetes.io/projected/19451c9d-d740-439e-ba98-ce86a4dce532-kube-api-access-fjxx5\") pod \"csi-node-driver-c7dnf\" (UID: \"19451c9d-d740-439e-ba98-ce86a4dce532\") " pod="calico-system/csi-node-driver-c7dnf" Jan 14 01:21:04.348845 kubelet[4019]: E0114 01:21:04.348832 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348845 kubelet[4019]: W0114 01:21:04.348842 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348944 kubelet[4019]: E0114 01:21:04.348850 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.348944 kubelet[4019]: E0114 01:21:04.348941 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.348985 kubelet[4019]: W0114 01:21:04.348945 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.348985 kubelet[4019]: E0114 01:21:04.348951 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.349069 kubelet[4019]: E0114 01:21:04.349034 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.349069 kubelet[4019]: W0114 01:21:04.349039 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.349069 kubelet[4019]: E0114 01:21:04.349044 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.349162 kubelet[4019]: E0114 01:21:04.349136 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.349162 kubelet[4019]: W0114 01:21:04.349141 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.349162 kubelet[4019]: E0114 01:21:04.349147 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.358971 containerd[2507]: time="2026-01-14T01:21:04.358908701Z" level=info msg="connecting to shim f024e54f221e0e1ce157c69c0040e4abdc8a04fa0837eceee2bca78d338885d4" address="unix:///run/containerd/s/ed1a487c729406d323e0aa007ded0eae2f5d51710d61fe3f82b70097f3c47806" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:04.377655 systemd[1]: Started cri-containerd-f024e54f221e0e1ce157c69c0040e4abdc8a04fa0837eceee2bca78d338885d4.scope - libcontainer container f024e54f221e0e1ce157c69c0040e4abdc8a04fa0837eceee2bca78d338885d4. Jan 14 01:21:04.385000 audit: BPF prog-id=175 op=LOAD Jan 14 01:21:04.386000 audit: BPF prog-id=176 op=LOAD Jan 14 01:21:04.386000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4482 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323465353466323231653065316365313537633639633030343065 Jan 14 01:21:04.386000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:21:04.386000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4482 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323465353466323231653065316365313537633639633030343065 Jan 14 01:21:04.386000 audit: BPF prog-id=177 op=LOAD Jan 14 01:21:04.386000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4482 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323465353466323231653065316365313537633639633030343065 Jan 14 01:21:04.386000 audit: BPF prog-id=178 op=LOAD Jan 14 01:21:04.386000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4482 pid=4494 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323465353466323231653065316365313537633639633030343065 Jan 14 01:21:04.387000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:21:04.387000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4482 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323465353466323231653065316365313537633639633030343065 Jan 14 01:21:04.387000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:21:04.387000 audit[4494]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4482 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323465353466323231653065316365313537633639633030343065 Jan 14 01:21:04.387000 audit: BPF prog-id=179 op=LOAD Jan 14 01:21:04.387000 audit[4494]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4482 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630323465353466323231653065316365313537633639633030343065 Jan 14 01:21:04.434074 containerd[2507]: time="2026-01-14T01:21:04.434044788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cb7cd7bd-t4s2r,Uid:370952c2-d049-420b-a34a-b8c72e368198,Namespace:calico-system,Attempt:0,} returns sandbox id \"f024e54f221e0e1ce157c69c0040e4abdc8a04fa0837eceee2bca78d338885d4\"" Jan 14 01:21:04.436397 containerd[2507]: time="2026-01-14T01:21:04.436372599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:21:04.449940 kubelet[4019]: E0114 01:21:04.449923 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.449940 kubelet[4019]: W0114 01:21:04.449937 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450201 kubelet[4019]: E0114 01:21:04.449950 4019 plugins.go:697] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.450201 kubelet[4019]: E0114 01:21:04.450074 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450201 kubelet[4019]: W0114 01:21:04.450079 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450201 kubelet[4019]: E0114 01:21:04.450087 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.450201 kubelet[4019]: E0114 01:21:04.450176 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450201 kubelet[4019]: W0114 01:21:04.450180 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450201 kubelet[4019]: E0114 01:21:04.450186 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.450554 kubelet[4019]: E0114 01:21:04.450274 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450554 kubelet[4019]: W0114 01:21:04.450278 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450554 kubelet[4019]: E0114 01:21:04.450284 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.450554 kubelet[4019]: E0114 01:21:04.450368 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450554 kubelet[4019]: W0114 01:21:04.450372 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450554 kubelet[4019]: E0114 01:21:04.450378 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.450554 kubelet[4019]: E0114 01:21:04.450511 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450554 kubelet[4019]: W0114 01:21:04.450517 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450554 kubelet[4019]: E0114 01:21:04.450523 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.450953 kubelet[4019]: E0114 01:21:04.450598 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450953 kubelet[4019]: W0114 01:21:04.450603 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450953 kubelet[4019]: E0114 01:21:04.450609 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.450953 kubelet[4019]: E0114 01:21:04.450674 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450953 kubelet[4019]: W0114 01:21:04.450678 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450953 kubelet[4019]: E0114 01:21:04.450684 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.450953 kubelet[4019]: E0114 01:21:04.450779 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.450953 kubelet[4019]: W0114 01:21:04.450784 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.450953 kubelet[4019]: E0114 01:21:04.450791 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.451349 kubelet[4019]: E0114 01:21:04.451158 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.451349 kubelet[4019]: W0114 01:21:04.451168 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.451395 containerd[2507]: time="2026-01-14T01:21:04.450965637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d8gs4,Uid:9a88546b-ae5f-453d-89ee-751ad052b7c6,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:04.452053 kubelet[4019]: E0114 01:21:04.451180 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.452053 kubelet[4019]: E0114 01:21:04.451663 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.452053 kubelet[4019]: W0114 01:21:04.451678 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.452053 kubelet[4019]: E0114 01:21:04.451689 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.452053 kubelet[4019]: E0114 01:21:04.451974 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.452053 kubelet[4019]: W0114 01:21:04.451983 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.452218 kubelet[4019]: E0114 01:21:04.451993 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.452366 kubelet[4019]: E0114 01:21:04.452356 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.452366 kubelet[4019]: W0114 01:21:04.452365 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.452441 kubelet[4019]: E0114 01:21:04.452376 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.452650 kubelet[4019]: E0114 01:21:04.452641 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.452723 kubelet[4019]: W0114 01:21:04.452650 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.452747 kubelet[4019]: E0114 01:21:04.452728 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.453040 kubelet[4019]: E0114 01:21:04.453005 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.453195 kubelet[4019]: W0114 01:21:04.453016 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.453195 kubelet[4019]: E0114 01:21:04.453109 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.453306 kubelet[4019]: E0114 01:21:04.453299 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.453342 kubelet[4019]: W0114 01:21:04.453335 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.453378 kubelet[4019]: E0114 01:21:04.453372 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.453571 kubelet[4019]: E0114 01:21:04.453541 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.453571 kubelet[4019]: W0114 01:21:04.453548 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.453571 kubelet[4019]: E0114 01:21:04.453555 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.454061 kubelet[4019]: E0114 01:21:04.453922 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.454061 kubelet[4019]: W0114 01:21:04.453930 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.454061 kubelet[4019]: E0114 01:21:04.453937 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.454213 kubelet[4019]: E0114 01:21:04.454189 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.454213 kubelet[4019]: W0114 01:21:04.454196 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.454213 kubelet[4019]: E0114 01:21:04.454204 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.454450 kubelet[4019]: E0114 01:21:04.454427 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.454450 kubelet[4019]: W0114 01:21:04.454434 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.454450 kubelet[4019]: E0114 01:21:04.454442 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.454771 kubelet[4019]: E0114 01:21:04.454763 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.454815 kubelet[4019]: W0114 01:21:04.454809 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.454859 kubelet[4019]: E0114 01:21:04.454846 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.455055 kubelet[4019]: E0114 01:21:04.455034 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.455055 kubelet[4019]: W0114 01:21:04.455040 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.455055 kubelet[4019]: E0114 01:21:04.455047 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.455228 kubelet[4019]: E0114 01:21:04.455211 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.455228 kubelet[4019]: W0114 01:21:04.455217 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.455228 kubelet[4019]: E0114 01:21:04.455222 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.455430 kubelet[4019]: E0114 01:21:04.455413 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.455430 kubelet[4019]: W0114 01:21:04.455418 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.455430 kubelet[4019]: E0114 01:21:04.455423 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.455637 kubelet[4019]: E0114 01:21:04.455615 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.455637 kubelet[4019]: W0114 01:21:04.455621 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.455637 kubelet[4019]: E0114 01:21:04.455627 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:04.459597 kubelet[4019]: E0114 01:21:04.459569 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:04.459703 kubelet[4019]: W0114 01:21:04.459581 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:04.459703 kubelet[4019]: E0114 01:21:04.459680 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:04.501504 containerd[2507]: time="2026-01-14T01:21:04.501306026Z" level=info msg="connecting to shim 29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab" address="unix:///run/containerd/s/b2637069f4721836b63a827585d0a9d53da69f2839dac06854a8459904e9c04a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:04.517662 systemd[1]: Started cri-containerd-29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab.scope - libcontainer container 29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab. Jan 14 01:21:04.527000 audit: BPF prog-id=180 op=LOAD Jan 14 01:21:04.529247 kernel: kauditd_printk_skb: 41 callbacks suppressed Jan 14 01:21:04.529282 kernel: audit: type=1334 audit(1768353664.527:572): prog-id=180 op=LOAD Jan 14 01:21:04.529000 audit: BPF prog-id=181 op=LOAD Jan 14 01:21:04.533367 kernel: audit: type=1334 audit(1768353664.529:573): prog-id=181 op=LOAD Jan 14 01:21:04.529000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.538140 kernel: audit: type=1300 audit(1768353664.529:573): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.539558 kernel: audit: type=1327 audit(1768353664.529:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.530000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:21:04.547497 kernel: audit: type=1334 audit(1768353664.530:574): prog-id=181 op=UNLOAD Jan 14 01:21:04.530000 audit[4566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.530000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.556072 kernel: audit: type=1300 audit(1768353664.530:574): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.556159 kernel: audit: type=1327 audit(1768353664.530:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.530000 audit: BPF prog-id=182 op=LOAD Jan 14 01:21:04.558003 kernel: audit: type=1334 audit(1768353664.530:575): prog-id=182 op=LOAD Jan 14 01:21:04.561263 kernel: audit: type=1300 audit(1768353664.530:575): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.530000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.566816 kernel: audit: type=1327 audit(1768353664.530:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.566864 containerd[2507]: time="2026-01-14T01:21:04.565687340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d8gs4,Uid:9a88546b-ae5f-453d-89ee-751ad052b7c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab\"" Jan 14 01:21:04.530000 audit: BPF prog-id=183 op=LOAD Jan 14 01:21:04.530000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.530000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:21:04.530000 audit[4566]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.530000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:21:04.530000 audit[4566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.530000 audit: BPF prog-id=184 op=LOAD Jan 14 01:21:04.530000 audit[4566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4556 pid=4566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239636437653734346565663564343036366164363363663234656330 Jan 14 01:21:04.963000 audit[4594]: NETFILTER_CFG table=filter:116 family=2 entries=22 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:04.963000 audit[4594]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe59a47c60 a2=0 a3=7ffe59a47c4c items=0 ppid=4124 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:04.966000 audit[4594]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:04.966000 audit[4594]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe59a47c60 a2=0 a3=0 items=0 ppid=4124 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:04.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:05.738421 kubelet[4019]: E0114 01:21:05.738347 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:05.822689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242251845.mount: Deactivated successfully. Jan 14 01:21:07.190543 containerd[2507]: time="2026-01-14T01:21:07.190505759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:07.193758 containerd[2507]: time="2026-01-14T01:21:07.193675651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33736634" Jan 14 01:21:07.200350 containerd[2507]: time="2026-01-14T01:21:07.200315945Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:07.207692 containerd[2507]: time="2026-01-14T01:21:07.207632994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:07.208200 containerd[2507]: time="2026-01-14T01:21:07.208179972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.771776902s" Jan 14 01:21:07.208256 containerd[2507]: time="2026-01-14T01:21:07.208206013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:21:07.209068 containerd[2507]: time="2026-01-14T01:21:07.209045488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:21:07.227643 containerd[2507]: time="2026-01-14T01:21:07.227619245Z" level=info msg="CreateContainer within sandbox \"f024e54f221e0e1ce157c69c0040e4abdc8a04fa0837eceee2bca78d338885d4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:21:07.260448 containerd[2507]: time="2026-01-14T01:21:07.260421858Z" level=info msg="Container 754383308bd0769348a3fdb99514d2b20404a10063e69564fc766a33e1b52de0: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:07.265551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459601588.mount: Deactivated successfully. 
Jan 14 01:21:07.289671 containerd[2507]: time="2026-01-14T01:21:07.289649179Z" level=info msg="CreateContainer within sandbox \"f024e54f221e0e1ce157c69c0040e4abdc8a04fa0837eceee2bca78d338885d4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"754383308bd0769348a3fdb99514d2b20404a10063e69564fc766a33e1b52de0\"" Jan 14 01:21:07.290988 containerd[2507]: time="2026-01-14T01:21:07.290062712Z" level=info msg="StartContainer for \"754383308bd0769348a3fdb99514d2b20404a10063e69564fc766a33e1b52de0\"" Jan 14 01:21:07.291102 containerd[2507]: time="2026-01-14T01:21:07.291054815Z" level=info msg="connecting to shim 754383308bd0769348a3fdb99514d2b20404a10063e69564fc766a33e1b52de0" address="unix:///run/containerd/s/ed1a487c729406d323e0aa007ded0eae2f5d51710d61fe3f82b70097f3c47806" protocol=ttrpc version=3 Jan 14 01:21:07.313675 systemd[1]: Started cri-containerd-754383308bd0769348a3fdb99514d2b20404a10063e69564fc766a33e1b52de0.scope - libcontainer container 754383308bd0769348a3fdb99514d2b20404a10063e69564fc766a33e1b52de0. Jan 14 01:21:07.322000 audit: BPF prog-id=185 op=LOAD Jan 14 01:21:07.323000 audit: BPF prog-id=186 op=LOAD Jan 14 01:21:07.323000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4482 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:07.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343338333330386264303736393334386133666462393935313464 Jan 14 01:21:07.323000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:21:07.323000 audit[4605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4482 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:07.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343338333330386264303736393334386133666462393935313464 Jan 14 01:21:07.323000 audit: BPF prog-id=187 op=LOAD Jan 14 01:21:07.323000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4482 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:07.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343338333330386264303736393334386133666462393935313464 Jan 14 01:21:07.323000 audit: BPF prog-id=188 op=LOAD Jan 14 01:21:07.323000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4482 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:07.323000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343338333330386264303736393334386133666462393935313464 Jan 14 01:21:07.323000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:21:07.323000 audit[4605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4482 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:07.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343338333330386264303736393334386133666462393935313464 Jan 14 01:21:07.323000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:21:07.323000 audit[4605]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4482 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:07.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343338333330386264303736393334386133666462393935313464 Jan 14 01:21:07.323000 audit: BPF prog-id=189 op=LOAD Jan 14 01:21:07.323000 audit[4605]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4482 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:07.323000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735343338333330386264303736393334386133666462393935313464 Jan 14 01:21:07.361277 containerd[2507]: time="2026-01-14T01:21:07.361244139Z" level=info msg="StartContainer for \"754383308bd0769348a3fdb99514d2b20404a10063e69564fc766a33e1b52de0\" returns successfully" Jan 14 01:21:07.739228 kubelet[4019]: E0114 01:21:07.739168 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:07.862288 kubelet[4019]: E0114 01:21:07.862263 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.862288 kubelet[4019]: W0114 01:21:07.862280 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.862460 kubelet[4019]: E0114 01:21:07.862295 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:07.862460 kubelet[4019]: E0114 01:21:07.862397 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.862460 kubelet[4019]: W0114 01:21:07.862402 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.862460 kubelet[4019]: E0114 01:21:07.862408 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.862740 kubelet[4019]: E0114 01:21:07.862531 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.862740 kubelet[4019]: W0114 01:21:07.862536 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.862740 kubelet[4019]: E0114 01:21:07.862541 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.862740 kubelet[4019]: E0114 01:21:07.862672 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.862740 kubelet[4019]: W0114 01:21:07.862677 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.862740 kubelet[4019]: E0114 01:21:07.862684 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.862933 kubelet[4019]: E0114 01:21:07.862778 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.862933 kubelet[4019]: W0114 01:21:07.862783 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.862933 kubelet[4019]: E0114 01:21:07.862788 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.862933 kubelet[4019]: E0114 01:21:07.862888 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.862933 kubelet[4019]: W0114 01:21:07.862896 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.862933 kubelet[4019]: E0114 01:21:07.862905 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:07.863087 kubelet[4019]: E0114 01:21:07.862995 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863087 kubelet[4019]: W0114 01:21:07.863000 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863087 kubelet[4019]: E0114 01:21:07.863005 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.863164 kubelet[4019]: E0114 01:21:07.863091 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863164 kubelet[4019]: W0114 01:21:07.863095 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863164 kubelet[4019]: E0114 01:21:07.863102 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.863241 kubelet[4019]: E0114 01:21:07.863187 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863241 kubelet[4019]: W0114 01:21:07.863212 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863241 kubelet[4019]: E0114 01:21:07.863218 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.863324 kubelet[4019]: E0114 01:21:07.863298 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863324 kubelet[4019]: W0114 01:21:07.863301 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863324 kubelet[4019]: E0114 01:21:07.863306 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.863399 kubelet[4019]: E0114 01:21:07.863381 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863399 kubelet[4019]: W0114 01:21:07.863385 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863399 kubelet[4019]: E0114 01:21:07.863390 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:07.863480 kubelet[4019]: E0114 01:21:07.863466 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863480 kubelet[4019]: W0114 01:21:07.863471 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863480 kubelet[4019]: E0114 01:21:07.863476 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.863586 kubelet[4019]: E0114 01:21:07.863579 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863586 kubelet[4019]: W0114 01:21:07.863584 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863633 kubelet[4019]: E0114 01:21:07.863590 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.863674 kubelet[4019]: E0114 01:21:07.863666 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863674 kubelet[4019]: W0114 01:21:07.863671 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863735 kubelet[4019]: E0114 01:21:07.863675 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.863759 kubelet[4019]: E0114 01:21:07.863750 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.863759 kubelet[4019]: W0114 01:21:07.863754 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.863810 kubelet[4019]: E0114 01:21:07.863759 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.876527 kubelet[4019]: E0114 01:21:07.876397 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.876527 kubelet[4019]: W0114 01:21:07.876415 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.876527 kubelet[4019]: E0114 01:21:07.876440 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:07.876658 kubelet[4019]: E0114 01:21:07.876620 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.876658 kubelet[4019]: W0114 01:21:07.876626 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.876658 kubelet[4019]: E0114 01:21:07.876635 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.876904 kubelet[4019]: E0114 01:21:07.876816 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.876904 kubelet[4019]: W0114 01:21:07.876823 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.876904 kubelet[4019]: E0114 01:21:07.876830 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.877303 kubelet[4019]: E0114 01:21:07.877022 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.877303 kubelet[4019]: W0114 01:21:07.877029 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.877303 kubelet[4019]: E0114 01:21:07.877041 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.877303 kubelet[4019]: E0114 01:21:07.877177 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.877303 kubelet[4019]: W0114 01:21:07.877183 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.877303 kubelet[4019]: E0114 01:21:07.877190 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.877303 kubelet[4019]: E0114 01:21:07.877289 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.877303 kubelet[4019]: W0114 01:21:07.877295 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.877303 kubelet[4019]: E0114 01:21:07.877301 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:07.877511 kubelet[4019]: E0114 01:21:07.877435 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.877511 kubelet[4019]: W0114 01:21:07.877440 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.877511 kubelet[4019]: E0114 01:21:07.877447 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.877581 kubelet[4019]: E0114 01:21:07.877568 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.877581 kubelet[4019]: W0114 01:21:07.877574 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.877625 kubelet[4019]: E0114 01:21:07.877580 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.877934 kubelet[4019]: E0114 01:21:07.877691 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.877934 kubelet[4019]: W0114 01:21:07.877705 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.877934 kubelet[4019]: E0114 01:21:07.877710 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.877934 kubelet[4019]: E0114 01:21:07.877887 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.877934 kubelet[4019]: W0114 01:21:07.877894 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.877934 kubelet[4019]: E0114 01:21:07.877901 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.878149 kubelet[4019]: E0114 01:21:07.878131 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.878149 kubelet[4019]: W0114 01:21:07.878143 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.878195 kubelet[4019]: E0114 01:21:07.878150 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:07.878257 kubelet[4019]: E0114 01:21:07.878247 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.878257 kubelet[4019]: W0114 01:21:07.878254 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.878305 kubelet[4019]: E0114 01:21:07.878260 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.878343 kubelet[4019]: E0114 01:21:07.878334 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.878343 kubelet[4019]: W0114 01:21:07.878341 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.878386 kubelet[4019]: E0114 01:21:07.878346 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.878524 kubelet[4019]: E0114 01:21:07.878481 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.878524 kubelet[4019]: W0114 01:21:07.878501 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.878524 kubelet[4019]: E0114 01:21:07.878508 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.878690 kubelet[4019]: E0114 01:21:07.878681 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.878690 kubelet[4019]: W0114 01:21:07.878689 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.878755 kubelet[4019]: E0114 01:21:07.878695 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.879049 kubelet[4019]: E0114 01:21:07.879036 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.879049 kubelet[4019]: W0114 01:21:07.879045 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.879116 kubelet[4019]: E0114 01:21:07.879052 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:21:07.879213 kubelet[4019]: E0114 01:21:07.879187 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.879213 kubelet[4019]: W0114 01:21:07.879209 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.879263 kubelet[4019]: E0114 01:21:07.879219 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:07.879331 kubelet[4019]: E0114 01:21:07.879319 4019 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:21:07.879331 kubelet[4019]: W0114 01:21:07.879325 4019 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:21:07.879367 kubelet[4019]: E0114 01:21:07.879331 4019 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:21:08.687190 containerd[2507]: time="2026-01-14T01:21:08.687145360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:08.690511 containerd[2507]: time="2026-01-14T01:21:08.690414301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:08.693756 containerd[2507]: time="2026-01-14T01:21:08.693712328Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:08.698531 containerd[2507]: time="2026-01-14T01:21:08.698492662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:08.698891 containerd[2507]: time="2026-01-14T01:21:08.698864791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.489792306s" Jan 14 01:21:08.698946 containerd[2507]: time="2026-01-14T01:21:08.698898138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:21:08.708963 containerd[2507]: time="2026-01-14T01:21:08.708571579Z" level=info msg="CreateContainer within sandbox \"29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:21:08.732099 containerd[2507]: time="2026-01-14T01:21:08.729114354Z" level=info msg="Container a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c: CDI 
devices from CRI Config.CDIDevices: []" Jan 14 01:21:08.752138 containerd[2507]: time="2026-01-14T01:21:08.752115294Z" level=info msg="CreateContainer within sandbox \"29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c\"" Jan 14 01:21:08.753079 containerd[2507]: time="2026-01-14T01:21:08.752676985Z" level=info msg="StartContainer for \"a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c\"" Jan 14 01:21:08.754397 containerd[2507]: time="2026-01-14T01:21:08.754374333Z" level=info msg="connecting to shim a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c" address="unix:///run/containerd/s/b2637069f4721836b63a827585d0a9d53da69f2839dac06854a8459904e9c04a" protocol=ttrpc version=3 Jan 14 01:21:08.771817 systemd[1]: Started cri-containerd-a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c.scope - libcontainer container a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c. Jan 14 01:21:08.807000 audit: BPF prog-id=190 op=LOAD Jan 14 01:21:08.807000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4556 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:08.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333138353735323264306161383461306539353664613830373537 Jan 14 01:21:08.807000 audit: BPF prog-id=191 op=LOAD Jan 14 01:21:08.807000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4556 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:08.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333138353735323264306161383461306539353664613830373537 Jan 14 01:21:08.807000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:21:08.807000 audit[4680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:08.807000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333138353735323264306161383461306539353664613830373537 Jan 14 01:21:08.807000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:21:08.807000 audit[4680]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:08.807000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333138353735323264306161383461306539353664613830373537 Jan 14 01:21:08.808000 audit: BPF prog-id=192 op=LOAD Jan 14 01:21:08.808000 audit[4680]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4556 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:08.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333138353735323264306161383461306539353664613830373537 Jan 14 01:21:08.810548 kubelet[4019]: I0114 01:21:08.809636 4019 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:21:08.829272 containerd[2507]: time="2026-01-14T01:21:08.829247534Z" level=info msg="StartContainer for \"a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c\" returns successfully" Jan 14 01:21:08.835392 systemd[1]: cri-containerd-a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c.scope: Deactivated successfully. Jan 14 01:21:08.837713 containerd[2507]: time="2026-01-14T01:21:08.837670021Z" level=info msg="received container exit event container_id:\"a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c\" id:\"a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c\" pid:4692 exited_at:{seconds:1768353668 nanos:837294941}" Jan 14 01:21:08.837000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:21:08.857372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a731857522d0aa84a0e956da80757bac224c030e5b0f9f8592c622364a5eee4c-rootfs.mount: Deactivated successfully. 
Jan 14 01:21:09.738467 kubelet[4019]: E0114 01:21:09.738425 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:09.833617 kubelet[4019]: I0114 01:21:09.829451 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-74cb7cd7bd-t4s2r" podStartSLOduration=4.056342699 podStartE2EDuration="6.829436528s" podCreationTimestamp="2026-01-14 01:21:03 +0000 UTC" firstStartedPulling="2026-01-14 01:21:04.435852895 +0000 UTC m=+19.790893267" lastFinishedPulling="2026-01-14 01:21:07.208946721 +0000 UTC m=+22.563987096" observedRunningTime="2026-01-14 01:21:07.823473739 +0000 UTC m=+23.178514110" watchObservedRunningTime="2026-01-14 01:21:09.829436528 +0000 UTC m=+25.184476905" Jan 14 01:21:10.817371 containerd[2507]: time="2026-01-14T01:21:10.817263604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:21:11.738246 kubelet[4019]: E0114 01:21:11.738189 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:13.738911 kubelet[4019]: E0114 01:21:13.738870 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:14.257187 containerd[2507]: time="2026-01-14T01:21:14.257149117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:14.259652 containerd[2507]: time="2026-01-14T01:21:14.259624598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:21:14.264031 containerd[2507]: time="2026-01-14T01:21:14.263628774Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:14.267475 containerd[2507]: time="2026-01-14T01:21:14.267408887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:14.269026 containerd[2507]: time="2026-01-14T01:21:14.268931080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.451618055s" Jan 14 01:21:14.269026 containerd[2507]: time="2026-01-14T01:21:14.268967902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 
01:21:14.278868 containerd[2507]: time="2026-01-14T01:21:14.278840760Z" level=info msg="CreateContainer within sandbox \"29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:21:14.297501 containerd[2507]: time="2026-01-14T01:21:14.297464583Z" level=info msg="Container 20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:14.311370 containerd[2507]: time="2026-01-14T01:21:14.311344601Z" level=info msg="CreateContainer within sandbox \"29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0\"" Jan 14 01:21:14.311794 containerd[2507]: time="2026-01-14T01:21:14.311712878Z" level=info msg="StartContainer for \"20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0\"" Jan 14 01:21:14.313142 containerd[2507]: time="2026-01-14T01:21:14.313105167Z" level=info msg="connecting to shim 20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0" address="unix:///run/containerd/s/b2637069f4721836b63a827585d0a9d53da69f2839dac06854a8459904e9c04a" protocol=ttrpc version=3 Jan 14 01:21:14.337646 systemd[1]: Started cri-containerd-20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0.scope - libcontainer container 20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0. Jan 14 01:21:14.373000 audit: BPF prog-id=193 op=LOAD Jan 14 01:21:14.376338 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 14 01:21:14.376387 kernel: audit: type=1334 audit(1768353674.373:596): prog-id=193 op=LOAD Jan 14 01:21:14.373000 audit[4737]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:14.383881 kernel: audit: type=1300 audit(1768353674.373:596): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:14.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.388981 kernel: audit: type=1327 audit(1768353674.373:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.390471 kernel: audit: type=1334 audit(1768353674.373:597): prog-id=194 op=LOAD Jan 14 01:21:14.373000 audit: BPF prog-id=194 op=LOAD Jan 14 01:21:14.373000 audit[4737]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:21:14.399750 kernel: audit: type=1300 audit(1768353674.373:597): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:14.399824 kernel: audit: type=1327 audit(1768353674.373:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.401186 kernel: audit: type=1334 audit(1768353674.373:598): prog-id=194 op=UNLOAD Jan 14 01:21:14.373000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:21:14.406077 kernel: audit: type=1300 audit(1768353674.373:598): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:14.373000 audit[4737]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:14.410436 kernel: audit: type=1327 audit(1768353674.373:598): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.373000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:21:14.373000 audit[4737]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:14.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.373000 audit: BPF prog-id=195 op=LOAD Jan 14 01:21:14.373000 audit[4737]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4556 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:14.373000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230636333346437656433366635336634633462633161363035313565 Jan 14 01:21:14.412506 kernel: audit: type=1334 audit(1768353674.373:599): prog-id=193 op=UNLOAD Jan 14 01:21:14.429504 containerd[2507]: time="2026-01-14T01:21:14.428545101Z" level=info msg="StartContainer for \"20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0\" returns successfully" Jan 14 01:21:15.546907 containerd[2507]: time="2026-01-14T01:21:15.546864105Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:21:15.548651 systemd[1]: cri-containerd-20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0.scope: Deactivated successfully. Jan 14 01:21:15.549212 systemd[1]: cri-containerd-20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0.scope: Consumed 383ms CPU time, 192.4M memory peak, 171.3M written to disk. Jan 14 01:21:15.550271 containerd[2507]: time="2026-01-14T01:21:15.550240064Z" level=info msg="received container exit event container_id:\"20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0\" id:\"20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0\" pid:4751 exited_at:{seconds:1768353675 nanos:550036943}" Jan 14 01:21:15.551000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:21:15.571460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20cc34d7ed36f53f4c4bc1a60515eef93f1913b5e88d3b3042da415350fd1ca0-rootfs.mount: Deactivated successfully. Jan 14 01:21:15.586794 kubelet[4019]: I0114 01:21:15.586770 4019 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 14 01:21:15.876220 systemd[1]: Created slice kubepods-burstable-pod105ea93b_acaa_40b0_85df_5f33aa1485e5.slice - libcontainer container kubepods-burstable-pod105ea93b_acaa_40b0_85df_5f33aa1485e5.slice. Jan 14 01:21:15.978537 kubelet[4019]: I0114 01:21:15.929439 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/105ea93b-acaa-40b0-85df-5f33aa1485e5-config-volume\") pod \"coredns-66bc5c9577-nmnvc\" (UID: \"105ea93b-acaa-40b0-85df-5f33aa1485e5\") " pod="kube-system/coredns-66bc5c9577-nmnvc" Jan 14 01:21:15.978537 kubelet[4019]: I0114 01:21:15.929509 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fkrb\" (UniqueName: \"kubernetes.io/projected/105ea93b-acaa-40b0-85df-5f33aa1485e5-kube-api-access-2fkrb\") pod \"coredns-66bc5c9577-nmnvc\" (UID: \"105ea93b-acaa-40b0-85df-5f33aa1485e5\") " pod="kube-system/coredns-66bc5c9577-nmnvc" Jan 14 01:21:15.880553 systemd[1]: Created slice kubepods-besteffort-pod19451c9d_d740_439e_ba98_ce86a4dce532.slice - libcontainer container kubepods-besteffort-pod19451c9d_d740_439e_ba98_ce86a4dce532.slice. Jan 14 01:21:15.990640 systemd[1]: Created slice kubepods-besteffort-podec043dd0_d5b9_4795_bda8_379dd9ed27d6.slice - libcontainer container kubepods-besteffort-podec043dd0_d5b9_4795_bda8_379dd9ed27d6.slice. 
Jan 14 01:21:16.030574 kubelet[4019]: I0114 01:21:16.030546 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec043dd0-d5b9-4795-bda8-379dd9ed27d6-tigera-ca-bundle\") pod \"calico-kube-controllers-7f76fbbdcb-fzd7d\" (UID: \"ec043dd0-d5b9-4795-bda8-379dd9ed27d6\") " pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" Jan 14 01:21:16.267344 kubelet[4019]: I0114 01:21:16.030595 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkn49\" (UniqueName: \"kubernetes.io/projected/ec043dd0-d5b9-4795-bda8-379dd9ed27d6-kube-api-access-xkn49\") pod \"calico-kube-controllers-7f76fbbdcb-fzd7d\" (UID: \"ec043dd0-d5b9-4795-bda8-379dd9ed27d6\") " pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" Jan 14 01:21:16.461069 containerd[2507]: time="2026-01-14T01:21:16.460897155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7dnf,Uid:19451c9d-d740-439e-ba98-ce86a4dce532,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:16.475388 systemd[1]: Created slice kubepods-besteffort-pod81ae1783_805d_45cb_a9d3_21a22f1883e1.slice - libcontainer container kubepods-besteffort-pod81ae1783_805d_45cb_a9d3_21a22f1883e1.slice. Jan 14 01:21:16.477379 containerd[2507]: time="2026-01-14T01:21:16.477342031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nmnvc,Uid:105ea93b-acaa-40b0-85df-5f33aa1485e5,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:16.480299 containerd[2507]: time="2026-01-14T01:21:16.479629137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f76fbbdcb-fzd7d,Uid:ec043dd0-d5b9-4795-bda8-379dd9ed27d6,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:16.495646 systemd[1]: Created slice kubepods-besteffort-pod2bd1a43b_e98f_4a1f_8c59_f0c6872188ff.slice - libcontainer container kubepods-besteffort-pod2bd1a43b_e98f_4a1f_8c59_f0c6872188ff.slice. Jan 14 01:21:16.511276 systemd[1]: Created slice kubepods-besteffort-pod4c98e701_5dc1_42d2_b8b2_315dbbe213e6.slice - libcontainer container kubepods-besteffort-pod4c98e701_5dc1_42d2_b8b2_315dbbe213e6.slice. Jan 14 01:21:16.522457 systemd[1]: Created slice kubepods-besteffort-pod512e673b_5c39_45b5_b82e_4a4fa2ad3be4.slice - libcontainer container kubepods-besteffort-pod512e673b_5c39_45b5_b82e_4a4fa2ad3be4.slice. 
Jan 14 01:21:16.534410 kubelet[4019]: I0114 01:21:16.534035 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-ca-bundle\") pod \"whisker-6dd999c47-6xnws\" (UID: \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\") " pod="calico-system/whisker-6dd999c47-6xnws" Jan 14 01:21:16.534410 kubelet[4019]: I0114 01:21:16.534071 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnqhs\" (UniqueName: \"kubernetes.io/projected/12820403-9657-4768-aa8a-a08f227bfebe-kube-api-access-nnqhs\") pod \"coredns-66bc5c9577-ndtsm\" (UID: \"12820403-9657-4768-aa8a-a08f227bfebe\") " pod="kube-system/coredns-66bc5c9577-ndtsm" Jan 14 01:21:16.534410 kubelet[4019]: I0114 01:21:16.534092 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-backend-key-pair\") pod \"whisker-6dd999c47-6xnws\" (UID: \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\") " pod="calico-system/whisker-6dd999c47-6xnws" Jan 14 01:21:16.534410 kubelet[4019]: I0114 01:21:16.534109 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhz5d\" (UniqueName: \"kubernetes.io/projected/4c98e701-5dc1-42d2-b8b2-315dbbe213e6-kube-api-access-vhz5d\") pod \"calico-apiserver-dc47777bb-hmsh5\" (UID: \"4c98e701-5dc1-42d2-b8b2-315dbbe213e6\") " pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" Jan 14 01:21:16.534410 kubelet[4019]: I0114 01:21:16.534126 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2bd1a43b-e98f-4a1f-8c59-f0c6872188ff-goldmane-key-pair\") pod \"goldmane-7c778bb748-q7p7k\" (UID: \"2bd1a43b-e98f-4a1f-8c59-f0c6872188ff\") " pod="calico-system/goldmane-7c778bb748-q7p7k" Jan 14 01:21:16.534276 systemd[1]: Created slice kubepods-burstable-pod12820403_9657_4768_aa8a_a08f227bfebe.slice - libcontainer container kubepods-burstable-pod12820403_9657_4768_aa8a_a08f227bfebe.slice. 
Jan 14 01:21:16.534668 kubelet[4019]: I0114 01:21:16.534144 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ffbw\" (UniqueName: \"kubernetes.io/projected/2bd1a43b-e98f-4a1f-8c59-f0c6872188ff-kube-api-access-7ffbw\") pod \"goldmane-7c778bb748-q7p7k\" (UID: \"2bd1a43b-e98f-4a1f-8c59-f0c6872188ff\") " pod="calico-system/goldmane-7c778bb748-q7p7k" Jan 14 01:21:16.534668 kubelet[4019]: I0114 01:21:16.534161 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/81ae1783-805d-45cb-a9d3-21a22f1883e1-calico-apiserver-certs\") pod \"calico-apiserver-dc47777bb-hzl5d\" (UID: \"81ae1783-805d-45cb-a9d3-21a22f1883e1\") " pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" Jan 14 01:21:16.534668 kubelet[4019]: I0114 01:21:16.534185 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2bd1a43b-e98f-4a1f-8c59-f0c6872188ff-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-q7p7k\" (UID: \"2bd1a43b-e98f-4a1f-8c59-f0c6872188ff\") " pod="calico-system/goldmane-7c778bb748-q7p7k" Jan 14 01:21:16.534668 kubelet[4019]: I0114 01:21:16.534203 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12820403-9657-4768-aa8a-a08f227bfebe-config-volume\") pod \"coredns-66bc5c9577-ndtsm\" (UID: \"12820403-9657-4768-aa8a-a08f227bfebe\") " pod="kube-system/coredns-66bc5c9577-ndtsm" Jan 14 01:21:16.534668 kubelet[4019]: I0114 01:21:16.534223 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd1a43b-e98f-4a1f-8c59-f0c6872188ff-config\") pod \"goldmane-7c778bb748-q7p7k\" (UID: \"2bd1a43b-e98f-4a1f-8c59-f0c6872188ff\") " pod="calico-system/goldmane-7c778bb748-q7p7k" Jan 14 01:21:16.534789 kubelet[4019]: I0114 01:21:16.534240 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ssv\" (UniqueName: \"kubernetes.io/projected/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-kube-api-access-p9ssv\") pod \"whisker-6dd999c47-6xnws\" (UID: \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\") " pod="calico-system/whisker-6dd999c47-6xnws" Jan 14 01:21:16.537635 kubelet[4019]: I0114 01:21:16.537611 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6t5w\" (UniqueName: \"kubernetes.io/projected/81ae1783-805d-45cb-a9d3-21a22f1883e1-kube-api-access-q6t5w\") pod \"calico-apiserver-dc47777bb-hzl5d\" (UID: \"81ae1783-805d-45cb-a9d3-21a22f1883e1\") " pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" Jan 14 01:21:16.538607 kubelet[4019]: I0114 01:21:16.537644 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c98e701-5dc1-42d2-b8b2-315dbbe213e6-calico-apiserver-certs\") pod \"calico-apiserver-dc47777bb-hmsh5\" (UID: \"4c98e701-5dc1-42d2-b8b2-315dbbe213e6\") " pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" Jan 14 01:21:16.607189 containerd[2507]: time="2026-01-14T01:21:16.605577569Z" level=error msg="Failed to destroy network for sandbox \"7c31c7d3b23e9eb88c65bdde5f11344240bf4be42024e1b176d26d69dcc8dbb1\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.609059 systemd[1]: run-netns-cni\x2d7d65f65c\x2da792\x2d8a48\x2ddbfe\x2d41752675a80d.mount: Deactivated successfully. Jan 14 01:21:16.615325 containerd[2507]: time="2026-01-14T01:21:16.615229741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7dnf,Uid:19451c9d-d740-439e-ba98-ce86a4dce532,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c31c7d3b23e9eb88c65bdde5f11344240bf4be42024e1b176d26d69dcc8dbb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.615567 kubelet[4019]: E0114 01:21:16.615425 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c31c7d3b23e9eb88c65bdde5f11344240bf4be42024e1b176d26d69dcc8dbb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.615784 kubelet[4019]: E0114 01:21:16.615593 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c31c7d3b23e9eb88c65bdde5f11344240bf4be42024e1b176d26d69dcc8dbb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7dnf" Jan 14 01:21:16.615784 kubelet[4019]: E0114 01:21:16.615630 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c31c7d3b23e9eb88c65bdde5f11344240bf4be42024e1b176d26d69dcc8dbb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7dnf" Jan 14 01:21:16.615784 kubelet[4019]: E0114 01:21:16.615728 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c31c7d3b23e9eb88c65bdde5f11344240bf4be42024e1b176d26d69dcc8dbb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:16.620787 containerd[2507]: time="2026-01-14T01:21:16.620755582Z" level=error msg="Failed to destroy network for sandbox \"4df435f93bf0a020b76cd505a4beeb419c08f15185c375a92b27122d3d574f95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.624373 systemd[1]: 
run-netns-cni\x2d4082680f\x2d93b0\x2d9ccd\x2ddaca\x2dddb2c4b78016.mount: Deactivated successfully. Jan 14 01:21:16.624675 containerd[2507]: time="2026-01-14T01:21:16.624579023Z" level=error msg="Failed to destroy network for sandbox \"b8902e9f2abbae84b2550947d8f698d6bc553462fc9eb387441d2db7691e215f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.627141 systemd[1]: run-netns-cni\x2d71ae30a8\x2daeee\x2dab42\x2d26bc\x2dc86754bc1e7a.mount: Deactivated successfully. Jan 14 01:21:16.630204 containerd[2507]: time="2026-01-14T01:21:16.630122191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f76fbbdcb-fzd7d,Uid:ec043dd0-d5b9-4795-bda8-379dd9ed27d6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df435f93bf0a020b76cd505a4beeb419c08f15185c375a92b27122d3d574f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.630330 kubelet[4019]: E0114 01:21:16.630285 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df435f93bf0a020b76cd505a4beeb419c08f15185c375a92b27122d3d574f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.630330 kubelet[4019]: E0114 01:21:16.630322 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df435f93bf0a020b76cd505a4beeb419c08f15185c375a92b27122d3d574f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" Jan 14 01:21:16.630411 kubelet[4019]: E0114 01:21:16.630339 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4df435f93bf0a020b76cd505a4beeb419c08f15185c375a92b27122d3d574f95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" Jan 14 01:21:16.630411 kubelet[4019]: E0114 01:21:16.630387 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f76fbbdcb-fzd7d_calico-system(ec043dd0-d5b9-4795-bda8-379dd9ed27d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f76fbbdcb-fzd7d_calico-system(ec043dd0-d5b9-4795-bda8-379dd9ed27d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4df435f93bf0a020b76cd505a4beeb419c08f15185c375a92b27122d3d574f95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:21:16.647862 
containerd[2507]: time="2026-01-14T01:21:16.647825173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nmnvc,Uid:105ea93b-acaa-40b0-85df-5f33aa1485e5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8902e9f2abbae84b2550947d8f698d6bc553462fc9eb387441d2db7691e215f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.648402 kubelet[4019]: E0114 01:21:16.648310 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8902e9f2abbae84b2550947d8f698d6bc553462fc9eb387441d2db7691e215f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.648402 kubelet[4019]: E0114 01:21:16.648350 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8902e9f2abbae84b2550947d8f698d6bc553462fc9eb387441d2db7691e215f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nmnvc" Jan 14 01:21:16.648402 kubelet[4019]: E0114 01:21:16.648366 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8902e9f2abbae84b2550947d8f698d6bc553462fc9eb387441d2db7691e215f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nmnvc" Jan 14 01:21:16.648625 kubelet[4019]: E0114 01:21:16.648414 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nmnvc_kube-system(105ea93b-acaa-40b0-85df-5f33aa1485e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nmnvc_kube-system(105ea93b-acaa-40b0-85df-5f33aa1485e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8902e9f2abbae84b2550947d8f698d6bc553462fc9eb387441d2db7691e215f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nmnvc" podUID="105ea93b-acaa-40b0-85df-5f33aa1485e5" Jan 14 01:21:16.783863 containerd[2507]: time="2026-01-14T01:21:16.783788119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hzl5d,Uid:81ae1783-805d-45cb-a9d3-21a22f1883e1,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:21:16.813605 containerd[2507]: time="2026-01-14T01:21:16.813518544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-q7p7k,Uid:2bd1a43b-e98f-4a1f-8c59-f0c6872188ff,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:16.821978 containerd[2507]: time="2026-01-14T01:21:16.821939503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hmsh5,Uid:4c98e701-5dc1-42d2-b8b2-315dbbe213e6,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:21:16.841678 
containerd[2507]: time="2026-01-14T01:21:16.841557850Z" level=error msg="Failed to destroy network for sandbox \"b02ea7cd461c666921e19501ee0e03e72e3b8c605814bda163785827a0e28b61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.848935 containerd[2507]: time="2026-01-14T01:21:16.848725088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd999c47-6xnws,Uid:512e673b-5c39-45b5-b82e-4a4fa2ad3be4,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:16.850303 containerd[2507]: time="2026-01-14T01:21:16.850282372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ndtsm,Uid:12820403-9657-4768-aa8a-a08f227bfebe,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:16.855091 containerd[2507]: time="2026-01-14T01:21:16.854996181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hzl5d,Uid:81ae1783-805d-45cb-a9d3-21a22f1883e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02ea7cd461c666921e19501ee0e03e72e3b8c605814bda163785827a0e28b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.855204 kubelet[4019]: E0114 01:21:16.855160 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02ea7cd461c666921e19501ee0e03e72e3b8c605814bda163785827a0e28b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.855257 kubelet[4019]: E0114 01:21:16.855201 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02ea7cd461c666921e19501ee0e03e72e3b8c605814bda163785827a0e28b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" Jan 14 01:21:16.855257 kubelet[4019]: E0114 01:21:16.855219 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02ea7cd461c666921e19501ee0e03e72e3b8c605814bda163785827a0e28b61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" Jan 14 01:21:16.855301 kubelet[4019]: E0114 01:21:16.855264 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dc47777bb-hzl5d_calico-apiserver(81ae1783-805d-45cb-a9d3-21a22f1883e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dc47777bb-hzl5d_calico-apiserver(81ae1783-805d-45cb-a9d3-21a22f1883e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b02ea7cd461c666921e19501ee0e03e72e3b8c605814bda163785827a0e28b61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:21:16.859086 containerd[2507]: time="2026-01-14T01:21:16.859053457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:21:16.893741 containerd[2507]: time="2026-01-14T01:21:16.893657517Z" level=error msg="Failed to destroy network for sandbox \"c53c8dfdbe19fc031b9d1161bbfa424a7be51169e2af4c117a78efea10db5f7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.904817 containerd[2507]: time="2026-01-14T01:21:16.904784727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-q7p7k,Uid:2bd1a43b-e98f-4a1f-8c59-f0c6872188ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53c8dfdbe19fc031b9d1161bbfa424a7be51169e2af4c117a78efea10db5f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.905551 kubelet[4019]: E0114 01:21:16.905008 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53c8dfdbe19fc031b9d1161bbfa424a7be51169e2af4c117a78efea10db5f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.905551 kubelet[4019]: E0114 01:21:16.905528 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53c8dfdbe19fc031b9d1161bbfa424a7be51169e2af4c117a78efea10db5f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-q7p7k" Jan 14 01:21:16.905551 kubelet[4019]: E0114 01:21:16.905549 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c53c8dfdbe19fc031b9d1161bbfa424a7be51169e2af4c117a78efea10db5f7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-q7p7k" Jan 14 01:21:16.905778 kubelet[4019]: E0114 01:21:16.905594 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-q7p7k_calico-system(2bd1a43b-e98f-4a1f-8c59-f0c6872188ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-q7p7k_calico-system(2bd1a43b-e98f-4a1f-8c59-f0c6872188ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c53c8dfdbe19fc031b9d1161bbfa424a7be51169e2af4c117a78efea10db5f7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:21:16.924504 
containerd[2507]: time="2026-01-14T01:21:16.924458844Z" level=error msg="Failed to destroy network for sandbox \"472e283bd04d0fb847530294b255e2e4fbe261ea3debb8f62b2bf8ab22d25c4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.940627 containerd[2507]: time="2026-01-14T01:21:16.940586895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hmsh5,Uid:4c98e701-5dc1-42d2-b8b2-315dbbe213e6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"472e283bd04d0fb847530294b255e2e4fbe261ea3debb8f62b2bf8ab22d25c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.941087 kubelet[4019]: E0114 01:21:16.940748 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472e283bd04d0fb847530294b255e2e4fbe261ea3debb8f62b2bf8ab22d25c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.941087 kubelet[4019]: E0114 01:21:16.940787 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472e283bd04d0fb847530294b255e2e4fbe261ea3debb8f62b2bf8ab22d25c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" Jan 14 01:21:16.941087 kubelet[4019]: E0114 01:21:16.940805 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"472e283bd04d0fb847530294b255e2e4fbe261ea3debb8f62b2bf8ab22d25c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" Jan 14 01:21:16.941209 kubelet[4019]: E0114 01:21:16.940847 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-dc47777bb-hmsh5_calico-apiserver(4c98e701-5dc1-42d2-b8b2-315dbbe213e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-dc47777bb-hmsh5_calico-apiserver(4c98e701-5dc1-42d2-b8b2-315dbbe213e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"472e283bd04d0fb847530294b255e2e4fbe261ea3debb8f62b2bf8ab22d25c4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:21:16.946301 containerd[2507]: time="2026-01-14T01:21:16.946271754Z" level=error msg="Failed to destroy network for sandbox \"bb3cda4832d1b47fb50e52c64cc4d893a00fab044c392fb16417dadd0d1c33d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.953305 containerd[2507]: time="2026-01-14T01:21:16.953277927Z" level=error msg="Failed to destroy network for sandbox \"36454cd75a21adaa7607b312391c3ca69b93f9b9b01491d95bc2efd49aa48174\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.953930 containerd[2507]: time="2026-01-14T01:21:16.953874255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ndtsm,Uid:12820403-9657-4768-aa8a-a08f227bfebe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3cda4832d1b47fb50e52c64cc4d893a00fab044c392fb16417dadd0d1c33d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.954064 kubelet[4019]: E0114 01:21:16.954042 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3cda4832d1b47fb50e52c64cc4d893a00fab044c392fb16417dadd0d1c33d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.954107 kubelet[4019]: E0114 01:21:16.954078 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3cda4832d1b47fb50e52c64cc4d893a00fab044c392fb16417dadd0d1c33d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ndtsm" Jan 14 01:21:16.954107 kubelet[4019]: E0114 01:21:16.954095 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3cda4832d1b47fb50e52c64cc4d893a00fab044c392fb16417dadd0d1c33d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ndtsm" Jan 14 01:21:16.954164 kubelet[4019]: E0114 01:21:16.954137 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ndtsm_kube-system(12820403-9657-4768-aa8a-a08f227bfebe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ndtsm_kube-system(12820403-9657-4768-aa8a-a08f227bfebe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb3cda4832d1b47fb50e52c64cc4d893a00fab044c392fb16417dadd0d1c33d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ndtsm" podUID="12820403-9657-4768-aa8a-a08f227bfebe" Jan 14 01:21:16.961543 containerd[2507]: time="2026-01-14T01:21:16.961465607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd999c47-6xnws,Uid:512e673b-5c39-45b5-b82e-4a4fa2ad3be4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"36454cd75a21adaa7607b312391c3ca69b93f9b9b01491d95bc2efd49aa48174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.961638 kubelet[4019]: E0114 01:21:16.961617 4019 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36454cd75a21adaa7607b312391c3ca69b93f9b9b01491d95bc2efd49aa48174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:21:16.961681 kubelet[4019]: E0114 01:21:16.961651 4019 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36454cd75a21adaa7607b312391c3ca69b93f9b9b01491d95bc2efd49aa48174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd999c47-6xnws" Jan 14 01:21:16.961705 kubelet[4019]: E0114 01:21:16.961678 4019 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36454cd75a21adaa7607b312391c3ca69b93f9b9b01491d95bc2efd49aa48174\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd999c47-6xnws" Jan 14 01:21:16.961746 kubelet[4019]: E0114 01:21:16.961723 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd999c47-6xnws_calico-system(512e673b-5c39-45b5-b82e-4a4fa2ad3be4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd999c47-6xnws_calico-system(512e673b-5c39-45b5-b82e-4a4fa2ad3be4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36454cd75a21adaa7607b312391c3ca69b93f9b9b01491d95bc2efd49aa48174\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd999c47-6xnws" podUID="512e673b-5c39-45b5-b82e-4a4fa2ad3be4" Jan 14 01:21:20.791726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895056068.mount: Deactivated successfully. 
Jan 14 01:21:20.835128 containerd[2507]: time="2026-01-14T01:21:20.835088007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:20.838821 containerd[2507]: time="2026-01-14T01:21:20.838738083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156882266" Jan 14 01:21:20.843363 containerd[2507]: time="2026-01-14T01:21:20.843335794Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:20.846993 containerd[2507]: time="2026-01-14T01:21:20.846940424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:21:20.847341 containerd[2507]: time="2026-01-14T01:21:20.847222227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.988040305s" Jan 14 01:21:20.847341 containerd[2507]: time="2026-01-14T01:21:20.847249283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:21:20.862705 containerd[2507]: time="2026-01-14T01:21:20.862683754Z" level=info msg="CreateContainer within sandbox \"29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:21:20.897715 containerd[2507]: time="2026-01-14T01:21:20.897689248Z" level=info msg="Container 290175fcab3a21eee2767263a5a4fc78889766eff1fbcb561b3feba91dc5fde7: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:20.932400 containerd[2507]: time="2026-01-14T01:21:20.932373348Z" level=info msg="CreateContainer within sandbox \"29cd7e744eef5d4066ad63cf24ec0a64eb81061616de4d3ea22c90adf5ed9cab\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"290175fcab3a21eee2767263a5a4fc78889766eff1fbcb561b3feba91dc5fde7\"" Jan 14 01:21:20.933705 containerd[2507]: time="2026-01-14T01:21:20.932921149Z" level=info msg="StartContainer for \"290175fcab3a21eee2767263a5a4fc78889766eff1fbcb561b3feba91dc5fde7\"" Jan 14 01:21:20.934757 containerd[2507]: time="2026-01-14T01:21:20.934724940Z" level=info msg="connecting to shim 290175fcab3a21eee2767263a5a4fc78889766eff1fbcb561b3feba91dc5fde7" address="unix:///run/containerd/s/b2637069f4721836b63a827585d0a9d53da69f2839dac06854a8459904e9c04a" protocol=ttrpc version=3 Jan 14 01:21:20.954675 systemd[1]: Started cri-containerd-290175fcab3a21eee2767263a5a4fc78889766eff1fbcb561b3feba91dc5fde7.scope - libcontainer container 290175fcab3a21eee2767263a5a4fc78889766eff1fbcb561b3feba91dc5fde7. 
Jan 14 01:21:21.006242 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:21:21.006320 kernel: audit: type=1334 audit(1768353681.003:602): prog-id=196 op=LOAD Jan 14 01:21:21.003000 audit: BPF prog-id=196 op=LOAD Jan 14 01:21:21.003000 audit[5013]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.013429 kernel: audit: type=1300 audit(1768353681.003:602): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.013509 kernel: audit: type=1327 audit(1768353681.003:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.003000 audit: BPF prog-id=197 op=LOAD Jan 14 01:21:21.019522 kernel: audit: type=1334 audit(1768353681.003:603): prog-id=197 op=LOAD Jan 14 01:21:21.003000 audit[5013]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.027576 kernel: audit: type=1300 audit(1768353681.003:603): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.027641 kernel: audit: type=1327 audit(1768353681.003:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.029453 kernel: audit: type=1334 audit(1768353681.003:604): prog-id=197 op=UNLOAD Jan 14 01:21:21.003000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:21:21.035146 kernel: audit: type=1300 audit(1768353681.003:604): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.003000 audit[5013]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.038511 kernel: audit: type=1327 audit(1768353681.003:604): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.003000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:21:21.051130 kernel: audit: type=1334 audit(1768353681.003:605): prog-id=196 op=UNLOAD Jan 14 01:21:21.003000 audit[5013]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.003000 audit: BPF prog-id=198 op=LOAD Jan 14 01:21:21.003000 audit[5013]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4556 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:21.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239303137356663616233613231656565323736373236336135613466 Jan 14 01:21:21.055867 containerd[2507]: time="2026-01-14T01:21:21.055811512Z" level=info msg="StartContainer for \"290175fcab3a21eee2767263a5a4fc78889766eff1fbcb561b3feba91dc5fde7\" returns successfully" Jan 14 01:21:21.295202 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:21:21.295271 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
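The audit PROCTITLE fields in the records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments. A small decoder using only the Python standard library (decode_proctitle is an illustrative helper, not an auditd tool):

# Decode an audit PROCTITLE hex value into its argument list.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Pasting the proctitle value from the runc records above decodes to:
#   ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
#    '/run/containerd/io.containerd.runtime.v2.task/k8s.io/290175fcab3a21eee...']
# (the kernel truncates proctitle, so the final task path is cut short).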
Jan 14 01:21:21.467103 kubelet[4019]: I0114 01:21:21.466952 4019 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-backend-key-pair\") pod \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\" (UID: \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\") " Jan 14 01:21:21.467103 kubelet[4019]: I0114 01:21:21.467077 4019 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-ca-bundle\") pod \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\" (UID: \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\") " Jan 14 01:21:21.467617 kubelet[4019]: I0114 01:21:21.467097 4019 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9ssv\" (UniqueName: \"kubernetes.io/projected/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-kube-api-access-p9ssv\") pod \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\" (UID: \"512e673b-5c39-45b5-b82e-4a4fa2ad3be4\") " Jan 14 01:21:21.470319 kubelet[4019]: I0114 01:21:21.470293 4019 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "512e673b-5c39-45b5-b82e-4a4fa2ad3be4" (UID: "512e673b-5c39-45b5-b82e-4a4fa2ad3be4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:21:21.471606 kubelet[4019]: I0114 01:21:21.471474 4019 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-kube-api-access-p9ssv" (OuterVolumeSpecName: "kube-api-access-p9ssv") pod "512e673b-5c39-45b5-b82e-4a4fa2ad3be4" (UID: "512e673b-5c39-45b5-b82e-4a4fa2ad3be4"). InnerVolumeSpecName "kube-api-access-p9ssv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:21:21.471987 kubelet[4019]: I0114 01:21:21.471967 4019 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "512e673b-5c39-45b5-b82e-4a4fa2ad3be4" (UID: "512e673b-5c39-45b5-b82e-4a4fa2ad3be4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:21:21.568622 kubelet[4019]: I0114 01:21:21.568581 4019 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-ca-bundle\") on node \"ci-4578.0.0-p-9807086b3c\" DevicePath \"\"" Jan 14 01:21:21.568622 kubelet[4019]: I0114 01:21:21.568601 4019 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p9ssv\" (UniqueName: \"kubernetes.io/projected/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-kube-api-access-p9ssv\") on node \"ci-4578.0.0-p-9807086b3c\" DevicePath \"\"" Jan 14 01:21:21.568622 kubelet[4019]: I0114 01:21:21.568608 4019 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/512e673b-5c39-45b5-b82e-4a4fa2ad3be4-whisker-backend-key-pair\") on node \"ci-4578.0.0-p-9807086b3c\" DevicePath \"\"" Jan 14 01:21:21.791693 systemd[1]: var-lib-kubelet-pods-512e673b\x2d5c39\x2d45b5\x2db82e\x2d4a4fa2ad3be4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9ssv.mount: Deactivated successfully. Jan 14 01:21:21.791799 systemd[1]: var-lib-kubelet-pods-512e673b\x2d5c39\x2d45b5\x2db82e\x2d4a4fa2ad3be4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:21:21.875241 systemd[1]: Removed slice kubepods-besteffort-pod512e673b_5c39_45b5_b82e_4a4fa2ad3be4.slice - libcontainer container kubepods-besteffort-pod512e673b_5c39_45b5_b82e_4a4fa2ad3be4.slice. Jan 14 01:21:21.890973 kubelet[4019]: I0114 01:21:21.890925 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d8gs4" podStartSLOduration=1.610125584 podStartE2EDuration="17.890909318s" podCreationTimestamp="2026-01-14 01:21:04 +0000 UTC" firstStartedPulling="2026-01-14 01:21:04.567062757 +0000 UTC m=+19.922103126" lastFinishedPulling="2026-01-14 01:21:20.847846496 +0000 UTC m=+36.202886860" observedRunningTime="2026-01-14 01:21:21.890830304 +0000 UTC m=+37.245870683" watchObservedRunningTime="2026-01-14 01:21:21.890909318 +0000 UTC m=+37.245949694" Jan 14 01:21:21.970027 systemd[1]: Created slice kubepods-besteffort-podce22a880_122c_47be_98f6_68b9053dbdfd.slice - libcontainer container kubepods-besteffort-podce22a880_122c_47be_98f6_68b9053dbdfd.slice. 
Jan 14 01:21:22.073119 kubelet[4019]: I0114 01:21:22.073015 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce22a880-122c-47be-98f6-68b9053dbdfd-whisker-ca-bundle\") pod \"whisker-768b846847-hs7j8\" (UID: \"ce22a880-122c-47be-98f6-68b9053dbdfd\") " pod="calico-system/whisker-768b846847-hs7j8" Jan 14 01:21:22.073119 kubelet[4019]: I0114 01:21:22.073048 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ce22a880-122c-47be-98f6-68b9053dbdfd-whisker-backend-key-pair\") pod \"whisker-768b846847-hs7j8\" (UID: \"ce22a880-122c-47be-98f6-68b9053dbdfd\") " pod="calico-system/whisker-768b846847-hs7j8" Jan 14 01:21:22.073119 kubelet[4019]: I0114 01:21:22.073070 4019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkzp\" (UniqueName: \"kubernetes.io/projected/ce22a880-122c-47be-98f6-68b9053dbdfd-kube-api-access-6pkzp\") pod \"whisker-768b846847-hs7j8\" (UID: \"ce22a880-122c-47be-98f6-68b9053dbdfd\") " pod="calico-system/whisker-768b846847-hs7j8" Jan 14 01:21:22.281996 containerd[2507]: time="2026-01-14T01:21:22.281957490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768b846847-hs7j8,Uid:ce22a880-122c-47be-98f6-68b9053dbdfd,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:22.401151 systemd-networkd[2133]: cali4fc150f9103: Link UP Jan 14 01:21:22.402678 systemd-networkd[2133]: cali4fc150f9103: Gained carrier Jan 14 01:21:22.417236 containerd[2507]: 2026-01-14 01:21:22.310 [INFO][5102] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:21:22.417236 containerd[2507]: 2026-01-14 01:21:22.317 [INFO][5102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0 whisker-768b846847- calico-system ce22a880-122c-47be-98f6-68b9053dbdfd 882 0 2026-01-14 01:21:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:768b846847 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c whisker-768b846847-hs7j8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4fc150f9103 [] [] }} ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-" Jan 14 01:21:22.417236 containerd[2507]: 2026-01-14 01:21:22.317 [INFO][5102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" Jan 14 01:21:22.417236 containerd[2507]: 2026-01-14 01:21:22.336 [INFO][5114] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" HandleID="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Workload="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.336 [INFO][5114] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" HandleID="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Workload="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-9807086b3c", "pod":"whisker-768b846847-hs7j8", "timestamp":"2026-01-14 01:21:22.336207744 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.336 [INFO][5114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.336 [INFO][5114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.336 [INFO][5114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.341 [INFO][5114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.344 [INFO][5114] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.347 [INFO][5114] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.348 [INFO][5114] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417471 containerd[2507]: 2026-01-14 01:21:22.350 [INFO][5114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417972 containerd[2507]: 2026-01-14 01:21:22.350 [INFO][5114] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417972 containerd[2507]: 2026-01-14 01:21:22.351 [INFO][5114] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228 Jan 14 01:21:22.417972 containerd[2507]: 2026-01-14 01:21:22.358 [INFO][5114] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417972 containerd[2507]: 2026-01-14 01:21:22.362 [INFO][5114] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.65/26] block=192.168.63.64/26 handle="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417972 containerd[2507]: 2026-01-14 01:21:22.362 [INFO][5114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.65/26] handle="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:22.417972 containerd[2507]: 2026-01-14 01:21:22.362 
[INFO][5114] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:21:22.417972 containerd[2507]: 2026-01-14 01:21:22.362 [INFO][5114] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.65/26] IPv6=[] ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" HandleID="k8s-pod-network.4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Workload="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" Jan 14 01:21:22.418120 containerd[2507]: 2026-01-14 01:21:22.365 [INFO][5102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0", GenerateName:"whisker-768b846847-", Namespace:"calico-system", SelfLink:"", UID:"ce22a880-122c-47be-98f6-68b9053dbdfd", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768b846847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"whisker-768b846847-hs7j8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4fc150f9103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:22.418120 containerd[2507]: 2026-01-14 01:21:22.365 [INFO][5102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.65/32] ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" Jan 14 01:21:22.418212 containerd[2507]: 2026-01-14 01:21:22.365 [INFO][5102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fc150f9103 ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" Jan 14 01:21:22.418212 containerd[2507]: 2026-01-14 01:21:22.403 [INFO][5102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" Jan 14 01:21:22.418263 containerd[2507]: 2026-01-14 01:21:22.403 [INFO][5102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0", GenerateName:"whisker-768b846847-", Namespace:"calico-system", SelfLink:"", UID:"ce22a880-122c-47be-98f6-68b9053dbdfd", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768b846847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228", Pod:"whisker-768b846847-hs7j8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4fc150f9103", MAC:"82:12:cf:96:88:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:22.418326 containerd[2507]: 2026-01-14 01:21:22.414 [INFO][5102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" Namespace="calico-system" Pod="whisker-768b846847-hs7j8" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-whisker--768b846847--hs7j8-eth0" Jan 14 01:21:22.486382 containerd[2507]: time="2026-01-14T01:21:22.486349542Z" level=info msg="connecting to shim 4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228" address="unix:///run/containerd/s/9d0db05318750bdd7e73324e4df00dc2b4a9e32c960be80599dbb5a56c220f3a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:22.505685 systemd[1]: Started cri-containerd-4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228.scope - libcontainer container 4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228. 
Jan 14 01:21:22.513000 audit: BPF prog-id=199 op=LOAD Jan 14 01:21:22.514000 audit: BPF prog-id=200 op=LOAD Jan 14 01:21:22.514000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=5137 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666637363730373464373834643937333866356138346332353066 Jan 14 01:21:22.514000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:21:22.514000 audit[5148]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5137 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666637363730373464373834643937333866356138346332353066 Jan 14 01:21:22.514000 audit: BPF prog-id=201 op=LOAD Jan 14 01:21:22.514000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=5137 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666637363730373464373834643937333866356138346332353066 Jan 14 01:21:22.514000 audit: BPF prog-id=202 op=LOAD Jan 14 01:21:22.514000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=5137 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666637363730373464373834643937333866356138346332353066 Jan 14 01:21:22.514000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:21:22.514000 audit[5148]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5137 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666637363730373464373834643937333866356138346332353066 Jan 14 01:21:22.514000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:21:22.514000 audit[5148]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5137 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666637363730373464373834643937333866356138346332353066 Jan 14 01:21:22.514000 audit: BPF prog-id=203 op=LOAD Jan 14 01:21:22.514000 audit[5148]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=5137 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:22.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466666637363730373464373834643937333866356138346332353066 Jan 14 01:21:22.548160 containerd[2507]: time="2026-01-14T01:21:22.547938565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768b846847-hs7j8,Uid:ce22a880-122c-47be-98f6-68b9053dbdfd,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fff767074d784d9738f5a84c250f979cf93f458198de0ebc28c1213fb631228\"" Jan 14 01:21:22.552266 containerd[2507]: time="2026-01-14T01:21:22.552246742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:21:22.742181 kubelet[4019]: I0114 01:21:22.741933 4019 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512e673b-5c39-45b5-b82e-4a4fa2ad3be4" path="/var/lib/kubelet/pods/512e673b-5c39-45b5-b82e-4a4fa2ad3be4/volumes" Jan 14 01:21:22.822679 containerd[2507]: time="2026-01-14T01:21:22.822642989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:22.826079 containerd[2507]: time="2026-01-14T01:21:22.826036321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:21:22.826239 containerd[2507]: time="2026-01-14T01:21:22.826108029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:22.826285 kubelet[4019]: E0114 01:21:22.826205 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:22.826285 kubelet[4019]: E0114 01:21:22.826253 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:22.826541 kubelet[4019]: E0114 01:21:22.826340 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:22.827412 containerd[2507]: time="2026-01-14T01:21:22.827389393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:21:23.087041 containerd[2507]: time="2026-01-14T01:21:23.086936701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:23.089795 containerd[2507]: time="2026-01-14T01:21:23.089764917Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:21:23.089902 containerd[2507]: time="2026-01-14T01:21:23.089830727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:23.089988 kubelet[4019]: E0114 01:21:23.089948 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:23.090030 kubelet[4019]: E0114 01:21:23.089993 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:23.090117 kubelet[4019]: E0114 01:21:23.090102 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:23.090209 kubelet[4019]: E0114 01:21:23.090144 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:21:23.875575 kubelet[4019]: E0114 01:21:23.875477 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:21:23.902000 audit[5312]: NETFILTER_CFG table=filter:118 family=2 entries=22 op=nft_register_rule pid=5312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:23.902000 audit[5312]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd8fc95770 a2=0 a3=7ffd8fc9575c items=0 ppid=4124 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:23.909000 audit[5312]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=5312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:23.909000 audit[5312]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8fc95770 a2=0 a3=0 items=0 ppid=4124 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:23.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:24.227622 systemd-networkd[2133]: cali4fc150f9103: Gained IPv6LL Jan 14 01:21:26.145099 kubelet[4019]: I0114 01:21:26.144928 4019 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:21:26.177000 audit[5359]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:26.179113 kernel: kauditd_printk_skb: 33 callbacks suppressed Jan 14 01:21:26.179187 kernel: audit: type=1325 audit(1768353686.177:617): table=filter:120 family=2 entries=21 op=nft_register_rule pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:26.187769 kernel: audit: type=1300 audit(1768353686.177:617): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2fcd21b0 a2=0 a3=7ffc2fcd219c items=0 ppid=4124 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.177000 audit[5359]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2fcd21b0 a2=0 a3=7ffc2fcd219c items=0 ppid=4124 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:26.189503 kernel: audit: type=1327 audit(1768353686.177:617): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:26.187000 audit[5359]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:26.191876 kernel: audit: type=1325 audit(1768353686.187:618): table=nat:121 family=2 entries=19 op=nft_register_chain pid=5359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:26.187000 audit[5359]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc2fcd21b0 a2=0 a3=7ffc2fcd219c items=0 ppid=4124 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.196509 kernel: audit: type=1300 audit(1768353686.187:618): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc2fcd21b0 a2=0 a3=7ffc2fcd219c items=0 ppid=4124 pid=5359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:26.199328 kernel: audit: type=1327 audit(1768353686.187:618): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:26.970000 audit: BPF prog-id=204 op=LOAD Jan 14 01:21:26.972515 kernel: audit: type=1334 audit(1768353686.970:619): prog-id=204 op=LOAD Jan 14 01:21:26.972602 kernel: audit: type=1300 audit(1768353686.970:619): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce99e4b30 a2=98 a3=1fffffffffffffff items=0 ppid=5362 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.970000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce99e4b30 a2=98 a3=1fffffffffffffff items=0 ppid=5362 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.970000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:26.983534 kernel: audit: type=1327 audit(1768353686.970:619): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:26.970000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:21:26.970000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce99e4b00 a3=0 items=0 ppid=5362 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.970000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:26.970000 audit: BPF prog-id=205 op=LOAD Jan 14 01:21:26.988523 kernel: audit: type=1334 audit(1768353686.970:620): prog-id=204 op=UNLOAD Jan 14 01:21:26.970000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce99e4a10 a2=94 a3=3 items=0 ppid=5362 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.970000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:26.970000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:21:26.970000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce99e4a10 a2=94 a3=3 items=0 ppid=5362 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.970000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:26.970000 audit: BPF prog-id=206 op=LOAD Jan 14 01:21:26.970000 audit[5396]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce99e4a50 a2=94 a3=7ffce99e4c30 items=0 ppid=5362 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.970000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:26.972000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:21:26.972000 audit[5396]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce99e4a50 a2=94 a3=7ffce99e4c30 items=0 ppid=5362 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.972000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:21:26.978000 audit: BPF prog-id=207 op=LOAD Jan 14 01:21:26.978000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb16d0f70 a2=98 a3=3 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.978000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:26.978000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:21:26.978000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeb16d0f40 a3=0 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.978000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:26.983000 audit: BPF prog-id=208 op=LOAD Jan 14 01:21:26.983000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb16d0d60 a2=94 a3=54428f items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:26.983000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:21:26.983000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb16d0d60 a2=94 a3=54428f items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:26.983000 audit: BPF prog-id=209 op=LOAD Jan 14 01:21:26.983000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb16d0d90 a2=94 a3=2 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:26.983000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:21:26.983000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb16d0d90 a2=0 a3=2 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:26.983000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.165000 audit: BPF prog-id=210 op=LOAD Jan 14 01:21:27.165000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeb16d0c50 a2=94 a3=1 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.165000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:21:27.165000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffeb16d0c50 a2=94 a3=1 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.165000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.173000 audit: BPF prog-id=211 op=LOAD Jan 14 01:21:27.173000 audit[5397]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=5 a0=5 a1=7ffeb16d0c40 a2=94 a3=4 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.173000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.173000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:21:27.173000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeb16d0c40 a2=0 a3=4 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.173000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=212 op=LOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb16d0aa0 a2=94 a3=5 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb16d0aa0 a2=0 a3=5 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=213 op=LOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb16d0cc0 a2=94 a3=6 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffeb16d0cc0 a2=0 a3=6 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=214 op=LOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeb16d0470 a2=94 a3=88 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=215 op=LOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffeb16d02f0 a2=94 a3=2 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=215 op=UNLOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffeb16d0320 a2=0 a3=7ffeb16d0420 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.174000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:21:27.174000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=29ba4d10 a2=0 a3=fd1df3db0c6a4233 items=0 ppid=5362 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.174000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:21:27.180000 audit: BPF prog-id=216 op=LOAD Jan 14 01:21:27.180000 audit[5420]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff41d32380 a2=98 a3=1999999999999999 items=0 ppid=5362 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:27.180000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:21:27.180000 audit[5420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff41d32350 a3=0 items=0 ppid=5362 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:27.180000 audit: BPF prog-id=217 op=LOAD Jan 14 01:21:27.180000 audit[5420]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff41d32260 a2=94 a3=ffff items=0 ppid=5362 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:27.180000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:21:27.180000 audit[5420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff41d32260 a2=94 a3=ffff items=0 ppid=5362 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:27.180000 audit: BPF prog-id=218 op=LOAD Jan 14 01:21:27.180000 audit[5420]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff41d322a0 a2=94 a3=7fff41d32480 items=0 ppid=5362 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:27.180000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:21:27.180000 audit[5420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff41d322a0 a2=94 a3=7fff41d32480 items=0 ppid=5362 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.180000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:21:27.273702 systemd-networkd[2133]: vxlan.calico: Link UP Jan 14 01:21:27.273708 systemd-networkd[2133]: vxlan.calico: Gained carrier Jan 14 01:21:27.292000 audit: BPF prog-id=219 op=LOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc6123060 a2=98 a3=0 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=219 op=UNLOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffc6123030 a3=0 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=220 op=LOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc6122e70 a2=94 a3=54428f items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=220 op=UNLOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffc6122e70 a2=94 a3=54428f items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=221 op=LOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffc6122ea0 a2=94 a3=2 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffc6122ea0 a2=0 a3=2 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=222 op=LOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc6122c50 a2=94 a3=4 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc6122c50 a2=94 a3=4 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=223 op=LOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc6122d50 a2=94 a3=7fffc6122ed0 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.292000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:21:27.292000 audit[5447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc6122d50 a2=0 a3=7fffc6122ed0 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.292000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.294000 audit: BPF prog-id=224 op=LOAD Jan 14 01:21:27.294000 audit[5447]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc6122480 a2=94 a3=2 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.294000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:21:27.294000 audit[5447]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffc6122480 a2=0 a3=2 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.294000 audit: BPF prog-id=225 op=LOAD Jan 14 01:21:27.294000 audit[5447]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffc6122580 a2=94 a3=30 items=0 ppid=5362 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.294000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:21:27.303000 audit: BPF prog-id=226 op=LOAD Jan 14 01:21:27.303000 audit[5454]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd8bac490 a2=98 a3=0 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.303000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:21:27.303000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffd8bac460 a3=0 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.303000 audit: BPF prog-id=227 op=LOAD Jan 14 01:21:27.303000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd8bac280 a2=94 a3=54428f items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.303000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:21:27.303000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd8bac280 a2=94 a3=54428f items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.303000 audit: BPF prog-id=228 op=LOAD Jan 14 01:21:27.303000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd8bac2b0 a2=94 a3=2 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.303000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.303000 audit: BPF prog-id=228 op=UNLOAD Jan 14 01:21:27.303000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd8bac2b0 a2=0 a3=2 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.303000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.415000 audit: BPF prog-id=229 op=LOAD Jan 14 01:21:27.415000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd8bac170 a2=94 a3=1 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.415000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.415000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:21:27.415000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd8bac170 a2=94 a3=1 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.415000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=230 op=LOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd8bac160 a2=94 a3=4 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=230 op=UNLOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffd8bac160 a2=0 a3=4 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=231 op=LOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffd8babfc0 a2=94 a3=5 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffd8babfc0 a2=0 a3=5 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=232 op=LOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd8bac1e0 a2=94 a3=6 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffd8bac1e0 a2=0 a3=6 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=233 op=LOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd8bab990 a2=94 a3=88 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=234 op=LOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffd8bab810 a2=94 a3=2 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.423000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:21:27.423000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffd8bab840 a2=0 a3=7fffd8bab940 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.423000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.424000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:21:27.424000 audit[5454]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1405ad10 a2=0 
a3=ac51f00cb80846b6 items=0 ppid=5362 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.424000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:21:27.428000 audit: BPF prog-id=225 op=UNLOAD Jan 14 01:21:27.428000 audit[5362]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0007e4400 a2=0 a3=0 items=0 ppid=5180 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.428000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:21:27.497000 audit[5478]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=5478 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:27.497000 audit[5478]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc87b3caf0 a2=0 a3=7ffc87b3cadc items=0 ppid=5362 pid=5478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.497000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:27.498000 audit[5477]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=5477 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:27.498000 audit[5477]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffc0845350 a2=0 a3=7fffc084533c items=0 ppid=5362 pid=5477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.498000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:27.519000 audit[5476]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5476 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:27.519000 audit[5476]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc428ab350 a2=0 a3=7ffc428ab33c items=0 ppid=5362 pid=5476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.519000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:27.530000 audit[5483]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5483 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:27.530000 audit[5483]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc8d447bc0 a2=0 a3=7ffc8d447bac items=0 ppid=5362 pid=5483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:27.530000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:28.748345 containerd[2507]: time="2026-01-14T01:21:28.748233223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nmnvc,Uid:105ea93b-acaa-40b0-85df-5f33aa1485e5,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:28.849807 systemd-networkd[2133]: cali539826599aa: Link UP Jan 14 01:21:28.851419 systemd-networkd[2133]: cali539826599aa: Gained carrier Jan 14 01:21:28.865506 containerd[2507]: 2026-01-14 01:21:28.799 [INFO][5494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0 coredns-66bc5c9577- kube-system 105ea93b-acaa-40b0-85df-5f33aa1485e5 806 0 2026-01-14 01:20:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c coredns-66bc5c9577-nmnvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali539826599aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-" Jan 14 01:21:28.865506 containerd[2507]: 2026-01-14 01:21:28.799 [INFO][5494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" Jan 14 01:21:28.865506 containerd[2507]: 2026-01-14 01:21:28.819 [INFO][5507] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" HandleID="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Workload="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.819 [INFO][5507] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" HandleID="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Workload="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-9807086b3c", "pod":"coredns-66bc5c9577-nmnvc", "timestamp":"2026-01-14 01:21:28.819351207 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.819 [INFO][5507] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.819 [INFO][5507] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.819 [INFO][5507] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.824 [INFO][5507] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.828 [INFO][5507] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.831 [INFO][5507] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.833 [INFO][5507] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865675 containerd[2507]: 2026-01-14 01:21:28.834 [INFO][5507] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865873 containerd[2507]: 2026-01-14 01:21:28.834 [INFO][5507] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865873 containerd[2507]: 2026-01-14 01:21:28.835 [INFO][5507] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471 Jan 14 01:21:28.865873 containerd[2507]: 2026-01-14 01:21:28.840 [INFO][5507] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865873 containerd[2507]: 2026-01-14 01:21:28.845 [INFO][5507] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.66/26] block=192.168.63.64/26 handle="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865873 containerd[2507]: 2026-01-14 01:21:28.846 [INFO][5507] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.66/26] handle="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:28.865873 containerd[2507]: 2026-01-14 01:21:28.846 [INFO][5507] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
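The PROCTITLE fields in the bpftool and iptables audit records above are the recorded process's argv, hex-encoded by the kernel audit subsystem because the arguments are NUL-separated. A minimal decoding sketch (not part of the log; assumes a Python interpreter is at hand):

```python
# Decode an audit PROCTITLE value back into the command line it records.
# The audit subsystem hex-encodes proctitle because argv entries are separated
# by NUL bytes; splitting on NUL recovers the individual arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

# The first bpftool record above decodes to "bpftool map list --json".
print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
```

The longer values decode the same way, e.g. the bpftool map create call for /sys/fs/bpf/calico/calico_failsafe_ports_v1 and the repeated iptables-nft-restore --noflush invocations.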
Jan 14 01:21:28.865873 containerd[2507]: 2026-01-14 01:21:28.846 [INFO][5507] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.66/26] IPv6=[] ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" HandleID="k8s-pod-network.bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Workload="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" Jan 14 01:21:28.866007 containerd[2507]: 2026-01-14 01:21:28.847 [INFO][5494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"105ea93b-acaa-40b0-85df-5f33aa1485e5", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 20, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"coredns-66bc5c9577-nmnvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali539826599aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:28.866007 containerd[2507]: 2026-01-14 01:21:28.847 [INFO][5494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.66/32] ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" Jan 14 01:21:28.866007 containerd[2507]: 2026-01-14 01:21:28.847 [INFO][5494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali539826599aa ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" 
WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" Jan 14 01:21:28.866007 containerd[2507]: 2026-01-14 01:21:28.850 [INFO][5494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" Jan 14 01:21:28.866007 containerd[2507]: 2026-01-14 01:21:28.851 [INFO][5494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"105ea93b-acaa-40b0-85df-5f33aa1485e5", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 20, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471", Pod:"coredns-66bc5c9577-nmnvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali539826599aa", MAC:"92:bc:8b:7f:ac:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:28.866191 containerd[2507]: 2026-01-14 01:21:28.862 [INFO][5494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" Namespace="kube-system" Pod="coredns-66bc5c9577-nmnvc" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--nmnvc-eth0" Jan 14 01:21:28.876000 audit[5521]: NETFILTER_CFG table=filter:126 family=2 entries=42 op=nft_register_chain pid=5521 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:28.876000 
audit[5521]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffd88001b50 a2=0 a3=7ffd88001b3c items=0 ppid=5362 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:28.876000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:28.907715 containerd[2507]: time="2026-01-14T01:21:28.907652527Z" level=info msg="connecting to shim bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471" address="unix:///run/containerd/s/4aa83e8b80a48730c44f5f536c1e232425e6ab05e3d49f6b851ad70025fdbfcb" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:28.933664 systemd[1]: Started cri-containerd-bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471.scope - libcontainer container bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471. Jan 14 01:21:28.940000 audit: BPF prog-id=235 op=LOAD Jan 14 01:21:28.940000 audit: BPF prog-id=236 op=LOAD Jan 14 01:21:28.940000 audit[5541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5530 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:28.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264303665333965663839616635393365303562323061323261343266 Jan 14 01:21:28.940000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:21:28.940000 audit[5541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:28.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264303665333965663839616635393365303562323061323261343266 Jan 14 01:21:28.941000 audit: BPF prog-id=237 op=LOAD Jan 14 01:21:28.941000 audit[5541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5530 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:28.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264303665333965663839616635393365303562323061323261343266 Jan 14 01:21:28.941000 audit: BPF prog-id=238 op=LOAD Jan 14 01:21:28.941000 audit[5541]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5530 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:21:28.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264303665333965663839616635393365303562323061323261343266 Jan 14 01:21:28.941000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:21:28.941000 audit[5541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:28.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264303665333965663839616635393365303562323061323261343266 Jan 14 01:21:28.941000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:21:28.941000 audit[5541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:28.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264303665333965663839616635393365303562323061323261343266 Jan 14 01:21:28.941000 audit: BPF prog-id=239 op=LOAD Jan 14 01:21:28.941000 audit[5541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5530 pid=5541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:28.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6264303665333965663839616635393365303562323061323261343266 Jan 14 01:21:28.974917 containerd[2507]: time="2026-01-14T01:21:28.974883374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nmnvc,Uid:105ea93b-acaa-40b0-85df-5f33aa1485e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471\"" Jan 14 01:21:28.982724 containerd[2507]: time="2026-01-14T01:21:28.982700539Z" level=info msg="CreateContainer within sandbox \"bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:21:29.005557 containerd[2507]: time="2026-01-14T01:21:29.005065450Z" level=info msg="Container 0c75c1a38a1e1cf16d771b712155d926aed3c269c80117d7ef0954af098c260a: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:29.019804 containerd[2507]: time="2026-01-14T01:21:29.019777211Z" level=info msg="CreateContainer within sandbox \"bd06e39ef89af593e05b20a22a42f5101feb8e4fa38c51500b8e97a380eae471\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c75c1a38a1e1cf16d771b712155d926aed3c269c80117d7ef0954af098c260a\"" Jan 14 01:21:29.021177 containerd[2507]: time="2026-01-14T01:21:29.020179076Z" level=info 
msg="StartContainer for \"0c75c1a38a1e1cf16d771b712155d926aed3c269c80117d7ef0954af098c260a\"" Jan 14 01:21:29.021177 containerd[2507]: time="2026-01-14T01:21:29.020918177Z" level=info msg="connecting to shim 0c75c1a38a1e1cf16d771b712155d926aed3c269c80117d7ef0954af098c260a" address="unix:///run/containerd/s/4aa83e8b80a48730c44f5f536c1e232425e6ab05e3d49f6b851ad70025fdbfcb" protocol=ttrpc version=3 Jan 14 01:21:29.038647 systemd[1]: Started cri-containerd-0c75c1a38a1e1cf16d771b712155d926aed3c269c80117d7ef0954af098c260a.scope - libcontainer container 0c75c1a38a1e1cf16d771b712155d926aed3c269c80117d7ef0954af098c260a. Jan 14 01:21:29.046000 audit: BPF prog-id=240 op=LOAD Jan 14 01:21:29.046000 audit: BPF prog-id=241 op=LOAD Jan 14 01:21:29.046000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5530 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063373563316133386131653163663136643737316237313231353564 Jan 14 01:21:29.046000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:21:29.046000 audit[5566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063373563316133386131653163663136643737316237313231353564 Jan 14 01:21:29.046000 audit: BPF prog-id=242 op=LOAD Jan 14 01:21:29.046000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5530 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063373563316133386131653163663136643737316237313231353564 Jan 14 01:21:29.046000 audit: BPF prog-id=243 op=LOAD Jan 14 01:21:29.046000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5530 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063373563316133386131653163663136643737316237313231353564 Jan 14 01:21:29.046000 audit: BPF prog-id=243 op=UNLOAD Jan 14 01:21:29.046000 audit[5566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 
a2=0 a3=0 items=0 ppid=5530 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063373563316133386131653163663136643737316237313231353564 Jan 14 01:21:29.046000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:21:29.046000 audit[5566]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5530 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063373563316133386131653163663136643737316237313231353564 Jan 14 01:21:29.046000 audit: BPF prog-id=244 op=LOAD Jan 14 01:21:29.046000 audit[5566]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5530 pid=5566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063373563316133386131653163663136643737316237313231353564 Jan 14 01:21:29.065590 containerd[2507]: time="2026-01-14T01:21:29.065565602Z" level=info msg="StartContainer for \"0c75c1a38a1e1cf16d771b712155d926aed3c269c80117d7ef0954af098c260a\" returns successfully" Jan 14 01:21:29.347673 systemd-networkd[2133]: vxlan.calico: Gained IPv6LL Jan 14 01:21:29.744629 containerd[2507]: time="2026-01-14T01:21:29.744589765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hmsh5,Uid:4c98e701-5dc1-42d2-b8b2-315dbbe213e6,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:21:29.748380 containerd[2507]: time="2026-01-14T01:21:29.748334991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hzl5d,Uid:81ae1783-805d-45cb-a9d3-21a22f1883e1,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:21:29.859865 systemd-networkd[2133]: cali0e64278084b: Link UP Jan 14 01:21:29.859997 systemd-networkd[2133]: cali0e64278084b: Gained carrier Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.793 [INFO][5600] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0 calico-apiserver-dc47777bb- calico-apiserver 4c98e701-5dc1-42d2-b8b2-315dbbe213e6 811 0 2026-01-14 01:21:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dc47777bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c calico-apiserver-dc47777bb-hmsh5 eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0e64278084b [] [] }} ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.793 [INFO][5600] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.822 [INFO][5623] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" HandleID="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.822 [INFO][5623] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" HandleID="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-9807086b3c", "pod":"calico-apiserver-dc47777bb-hmsh5", "timestamp":"2026-01-14 01:21:29.822510894 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.823 [INFO][5623] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.823 [INFO][5623] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.823 [INFO][5623] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.829 [INFO][5623] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.833 [INFO][5623] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.837 [INFO][5623] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.839 [INFO][5623] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.840 [INFO][5623] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.841 [INFO][5623] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.842 [INFO][5623] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.846 [INFO][5623] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.853 [INFO][5623] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.67/26] block=192.168.63.64/26 handle="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.853 [INFO][5623] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.67/26] handle="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.854 [INFO][5623] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
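For the IPAM records above: the affinity block 192.168.63.64/26 spans 64 addresses (192.168.63.64 through 192.168.63.127), and the two pod IPs assigned so far, 192.168.63.66 and 192.168.63.67, both fall inside it. A quick sketch of that block arithmetic (illustration only, values copied from the log):

```python
# Check the Calico IPAM block seen in the log: a /26 holds 64 addresses,
# and both assigned pod IPs fall inside it.
import ipaddress

block = ipaddress.ip_network("192.168.63.64/26")
print(block.num_addresses)                              # 64
print(block[0], block[-1])                              # 192.168.63.64 192.168.63.127
print(ipaddress.ip_address("192.168.63.66") in block)   # True (coredns pod)
print(ipaddress.ip_address("192.168.63.67") in block)   # True (calico-apiserver pod)
```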
Jan 14 01:21:29.872527 containerd[2507]: 2026-01-14 01:21:29.854 [INFO][5623] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.67/26] IPv6=[] ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" HandleID="k8s-pod-network.2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" Jan 14 01:21:29.873015 containerd[2507]: 2026-01-14 01:21:29.856 [INFO][5600] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0", GenerateName:"calico-apiserver-dc47777bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c98e701-5dc1-42d2-b8b2-315dbbe213e6", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc47777bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"calico-apiserver-dc47777bb-hmsh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e64278084b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:29.873015 containerd[2507]: 2026-01-14 01:21:29.856 [INFO][5600] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.67/32] ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" Jan 14 01:21:29.873015 containerd[2507]: 2026-01-14 01:21:29.856 [INFO][5600] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e64278084b ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" Jan 14 01:21:29.873015 containerd[2507]: 2026-01-14 01:21:29.858 [INFO][5600] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" Jan 14 01:21:29.873015 containerd[2507]: 2026-01-14 01:21:29.858 [INFO][5600] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0", GenerateName:"calico-apiserver-dc47777bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"4c98e701-5dc1-42d2-b8b2-315dbbe213e6", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc47777bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e", Pod:"calico-apiserver-dc47777bb-hmsh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0e64278084b", MAC:"12:eb:0f:09:6f:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:29.873015 containerd[2507]: 2026-01-14 01:21:29.870 [INFO][5600] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hmsh5" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hmsh5-eth0" Jan 14 01:21:29.882000 audit[5647]: NETFILTER_CFG table=filter:127 family=2 entries=54 op=nft_register_chain pid=5647 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:29.882000 audit[5647]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffd2b280130 a2=0 a3=7ffd2b28011c items=0 ppid=5362 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.882000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:29.898613 kubelet[4019]: I0114 01:21:29.898553 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nmnvc" podStartSLOduration=40.898481126 podStartE2EDuration="40.898481126s" podCreationTimestamp="2026-01-14 01:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:21:29.897747748 +0000 UTC m=+45.252788125" watchObservedRunningTime="2026-01-14 
01:21:29.898481126 +0000 UTC m=+45.253521500" Jan 14 01:21:29.923607 systemd-networkd[2133]: cali539826599aa: Gained IPv6LL Jan 14 01:21:29.929726 containerd[2507]: time="2026-01-14T01:21:29.929560096Z" level=info msg="connecting to shim 2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e" address="unix:///run/containerd/s/2784fc13bae39f3d91fdbd59709757f13f575d95f62d5b1e98fe18585b2e01a9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:29.931000 audit[5650]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:29.931000 audit[5650]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc840cb0a0 a2=0 a3=7ffc840cb08c items=0 ppid=4124 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:29.935000 audit[5650]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:29.935000 audit[5650]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc840cb0a0 a2=0 a3=0 items=0 ppid=4124 pid=5650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:29.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:29.963742 systemd[1]: Started cri-containerd-2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e.scope - libcontainer container 2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e. 
Jan 14 01:21:29.987026 systemd-networkd[2133]: calie19d3905334: Link UP Jan 14 01:21:29.988903 systemd-networkd[2133]: calie19d3905334: Gained carrier Jan 14 01:21:30.004000 audit: BPF prog-id=245 op=LOAD Jan 14 01:21:30.005000 audit: BPF prog-id=246 op=LOAD Jan 14 01:21:30.005000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5657 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236363961376635656666343839656130326366333633343435316139 Jan 14 01:21:30.005000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:21:30.005000 audit[5668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5657 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236363961376635656666343839656130326366333633343435316139 Jan 14 01:21:30.005000 audit: BPF prog-id=247 op=LOAD Jan 14 01:21:30.005000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5657 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236363961376635656666343839656130326366333633343435316139 Jan 14 01:21:30.005000 audit: BPF prog-id=248 op=LOAD Jan 14 01:21:30.005000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5657 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236363961376635656666343839656130326366333633343435316139 Jan 14 01:21:30.006000 audit: BPF prog-id=248 op=UNLOAD Jan 14 01:21:30.006000 audit[5668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5657 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.006000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236363961376635656666343839656130326366333633343435316139 Jan 14 01:21:30.006000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:21:30.006000 audit[5668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5657 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236363961376635656666343839656130326366333633343435316139 Jan 14 01:21:30.006000 audit: BPF prog-id=249 op=LOAD Jan 14 01:21:30.006000 audit[5668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5657 pid=5668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236363961376635656666343839656130326366333633343435316139 Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.809 [INFO][5611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0 calico-apiserver-dc47777bb- calico-apiserver 81ae1783-805d-45cb-a9d3-21a22f1883e1 808 0 2026-01-14 01:21:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:dc47777bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c calico-apiserver-dc47777bb-hzl5d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie19d3905334 [] [] }} ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.809 [INFO][5611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.841 [INFO][5631] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" HandleID="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.842 [INFO][5631] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" HandleID="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578.0.0-p-9807086b3c", "pod":"calico-apiserver-dc47777bb-hzl5d", "timestamp":"2026-01-14 01:21:29.84199147 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.842 [INFO][5631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.854 [INFO][5631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.854 [INFO][5631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.930 [INFO][5631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.945 [INFO][5631] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.950 [INFO][5631] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.955 [INFO][5631] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.961 [INFO][5631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.961 [INFO][5631] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.962 [INFO][5631] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.968 [INFO][5631] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.982 [INFO][5631] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.68/26] block=192.168.63.64/26 handle="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.982 [INFO][5631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.68/26] handle="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 
01:21:29.982 [INFO][5631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:21:30.018217 containerd[2507]: 2026-01-14 01:21:29.982 [INFO][5631] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.68/26] IPv6=[] ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" HandleID="k8s-pod-network.fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" Jan 14 01:21:30.019430 containerd[2507]: 2026-01-14 01:21:29.984 [INFO][5611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0", GenerateName:"calico-apiserver-dc47777bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"81ae1783-805d-45cb-a9d3-21a22f1883e1", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc47777bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"calico-apiserver-dc47777bb-hzl5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie19d3905334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:30.019430 containerd[2507]: 2026-01-14 01:21:29.984 [INFO][5611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.68/32] ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" Jan 14 01:21:30.019430 containerd[2507]: 2026-01-14 01:21:29.984 [INFO][5611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie19d3905334 ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" Jan 14 01:21:30.019430 containerd[2507]: 2026-01-14 01:21:29.986 [INFO][5611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" Jan 14 
01:21:30.019430 containerd[2507]: 2026-01-14 01:21:29.987 [INFO][5611] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0", GenerateName:"calico-apiserver-dc47777bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"81ae1783-805d-45cb-a9d3-21a22f1883e1", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"dc47777bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed", Pod:"calico-apiserver-dc47777bb-hzl5d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie19d3905334", MAC:"4a:ca:d3:ab:d9:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:30.019430 containerd[2507]: 2026-01-14 01:21:30.014 [INFO][5611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" Namespace="calico-apiserver" Pod="calico-apiserver-dc47777bb-hzl5d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--apiserver--dc47777bb--hzl5d-eth0" Jan 14 01:21:30.034000 audit[5696]: NETFILTER_CFG table=filter:130 family=2 entries=45 op=nft_register_chain pid=5696 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:30.034000 audit[5696]: SYSCALL arch=c000003e syscall=46 success=yes exit=24264 a0=3 a1=7ffd5d35e210 a2=0 a3=7ffd5d35e1fc items=0 ppid=5362 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.034000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:30.055327 containerd[2507]: time="2026-01-14T01:21:30.055297397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hmsh5,Uid:4c98e701-5dc1-42d2-b8b2-315dbbe213e6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2669a7f5eff489ea02cf3634451a9c1da4e87d98558fff820a77c4be6238ba2e\"" Jan 14 01:21:30.056363 containerd[2507]: time="2026-01-14T01:21:30.056337127Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:30.075519 containerd[2507]: time="2026-01-14T01:21:30.075351136Z" level=info msg="connecting to shim fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed" address="unix:///run/containerd/s/7ee614aba2e4eb1bfa11df2a2ae3d818fd7f6fed7ff438de7120bf0aaa9cf4c9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:30.100650 systemd[1]: Started cri-containerd-fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed.scope - libcontainer container fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed. Jan 14 01:21:30.106000 audit: BPF prog-id=250 op=LOAD Jan 14 01:21:30.106000 audit: BPF prog-id=251 op=LOAD Jan 14 01:21:30.106000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5712 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664326434323830323762353363366136316536373838313436306365 Jan 14 01:21:30.106000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:21:30.106000 audit[5724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5712 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664326434323830323762353363366136316536373838313436306365 Jan 14 01:21:30.106000 audit: BPF prog-id=252 op=LOAD Jan 14 01:21:30.106000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5712 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664326434323830323762353363366136316536373838313436306365 Jan 14 01:21:30.106000 audit: BPF prog-id=253 op=LOAD Jan 14 01:21:30.106000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5712 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664326434323830323762353363366136316536373838313436306365 Jan 14 01:21:30.107000 audit: BPF prog-id=253 op=UNLOAD Jan 14 01:21:30.107000 audit[5724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5712 
pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664326434323830323762353363366136316536373838313436306365 Jan 14 01:21:30.107000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:21:30.107000 audit[5724]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5712 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664326434323830323762353363366136316536373838313436306365 Jan 14 01:21:30.107000 audit: BPF prog-id=254 op=LOAD Jan 14 01:21:30.107000 audit[5724]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5712 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664326434323830323762353363366136316536373838313436306365 Jan 14 01:21:30.139459 containerd[2507]: time="2026-01-14T01:21:30.139436163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-dc47777bb-hzl5d,Uid:81ae1783-805d-45cb-a9d3-21a22f1883e1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fd2d428027b53c6a61e67881460cec0284b8a0948018f9098cbb5b32243aa0ed\"" Jan 14 01:21:30.333828 containerd[2507]: time="2026-01-14T01:21:30.333743440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:30.336960 containerd[2507]: time="2026-01-14T01:21:30.336913652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:30.337056 containerd[2507]: time="2026-01-14T01:21:30.336922567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:30.337147 kubelet[4019]: E0114 01:21:30.337117 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:30.337205 kubelet[4019]: E0114 01:21:30.337154 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:30.337317 kubelet[4019]: E0114 01:21:30.337289 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-dc47777bb-hmsh5_calico-apiserver(4c98e701-5dc1-42d2-b8b2-315dbbe213e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:30.337343 kubelet[4019]: E0114 01:21:30.337323 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:21:30.337697 containerd[2507]: time="2026-01-14T01:21:30.337605686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:30.604679 containerd[2507]: time="2026-01-14T01:21:30.604590081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:30.607544 containerd[2507]: time="2026-01-14T01:21:30.607514051Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:30.607609 containerd[2507]: time="2026-01-14T01:21:30.607575166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:30.607725 kubelet[4019]: E0114 01:21:30.607691 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:30.607795 kubelet[4019]: E0114 01:21:30.607729 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:30.607910 kubelet[4019]: E0114 01:21:30.607804 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-dc47777bb-hzl5d_calico-apiserver(81ae1783-805d-45cb-a9d3-21a22f1883e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:30.607910 kubelet[4019]: E0114 01:21:30.607836 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" 
podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:21:30.744670 containerd[2507]: time="2026-01-14T01:21:30.744646696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-q7p7k,Uid:2bd1a43b-e98f-4a1f-8c59-f0c6872188ff,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:30.863913 systemd-networkd[2133]: cali288a7fed3c7: Link UP Jan 14 01:21:30.864094 systemd-networkd[2133]: cali288a7fed3c7: Gained carrier Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.789 [INFO][5751] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0 goldmane-7c778bb748- calico-system 2bd1a43b-e98f-4a1f-8c59-f0c6872188ff 809 0 2026-01-14 01:21:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c goldmane-7c778bb748-q7p7k eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali288a7fed3c7 [] [] }} ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.789 [INFO][5751] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.817 [INFO][5763] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" HandleID="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Workload="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.817 [INFO][5763] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" HandleID="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Workload="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-9807086b3c", "pod":"goldmane-7c778bb748-q7p7k", "timestamp":"2026-01-14 01:21:30.817638638 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.817 [INFO][5763] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.817 [INFO][5763] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.817 [INFO][5763] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.824 [INFO][5763] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.829 [INFO][5763] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.834 [INFO][5763] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.836 [INFO][5763] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.838 [INFO][5763] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.838 [INFO][5763] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.839 [INFO][5763] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.848 [INFO][5763] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.857 [INFO][5763] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.69/26] block=192.168.63.64/26 handle="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.857 [INFO][5763] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.69/26] handle="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.857 [INFO][5763] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:21:30.878521 containerd[2507]: 2026-01-14 01:21:30.857 [INFO][5763] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.69/26] IPv6=[] ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" HandleID="k8s-pod-network.42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Workload="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" Jan 14 01:21:30.880147 containerd[2507]: 2026-01-14 01:21:30.859 [INFO][5751] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2bd1a43b-e98f-4a1f-8c59-f0c6872188ff", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"goldmane-7c778bb748-q7p7k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali288a7fed3c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:30.880147 containerd[2507]: 2026-01-14 01:21:30.859 [INFO][5751] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.69/32] ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" Jan 14 01:21:30.880147 containerd[2507]: 2026-01-14 01:21:30.859 [INFO][5751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali288a7fed3c7 ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" Jan 14 01:21:30.880147 containerd[2507]: 2026-01-14 01:21:30.862 [INFO][5751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" Jan 14 01:21:30.880147 containerd[2507]: 2026-01-14 01:21:30.862 [INFO][5751] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" 
Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"2bd1a43b-e98f-4a1f-8c59-f0c6872188ff", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d", Pod:"goldmane-7c778bb748-q7p7k", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali288a7fed3c7", MAC:"b2:3c:45:b8:2c:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:30.880147 containerd[2507]: 2026-01-14 01:21:30.876 [INFO][5751] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" Namespace="calico-system" Pod="goldmane-7c778bb748-q7p7k" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-goldmane--7c778bb748--q7p7k-eth0" Jan 14 01:21:30.891986 kubelet[4019]: E0114 01:21:30.891865 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:21:30.898344 kubelet[4019]: E0114 01:21:30.898291 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:21:30.903000 audit[5779]: NETFILTER_CFG table=filter:131 family=2 entries=56 op=nft_register_chain pid=5779 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:30.903000 audit[5779]: SYSCALL arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffc521d3f60 a2=0 a3=7ffc521d3f4c items=0 ppid=5362 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.903000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:30.936015 containerd[2507]: time="2026-01-14T01:21:30.935915271Z" level=info msg="connecting to shim 42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d" address="unix:///run/containerd/s/429a5aea3c0057da1663973fd937e7784c9cbbd1b6463634744bcdbf76c4d3e2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:30.960639 systemd[1]: Started cri-containerd-42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d.scope - libcontainer container 42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d. Jan 14 01:21:30.969000 audit: BPF prog-id=255 op=LOAD Jan 14 01:21:30.969000 audit: BPF prog-id=256 op=LOAD Jan 14 01:21:30.969000 audit[5800]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5789 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432646261326533363065643535353235383039656631316230663430 Jan 14 01:21:30.969000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:21:30.969000 audit[5800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5789 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432646261326533363065643535353235383039656631316230663430 Jan 14 01:21:30.969000 audit: BPF prog-id=257 op=LOAD Jan 14 01:21:30.969000 audit[5800]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5789 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432646261326533363065643535353235383039656631316230663430 Jan 14 01:21:30.969000 audit: BPF prog-id=258 op=LOAD Jan 14 01:21:30.969000 audit[5800]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5789 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.969000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432646261326533363065643535353235383039656631316230663430 Jan 14 01:21:30.969000 audit: BPF prog-id=258 op=UNLOAD Jan 14 01:21:30.969000 audit[5800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5789 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432646261326533363065643535353235383039656631316230663430 Jan 14 01:21:30.969000 audit: BPF prog-id=257 op=UNLOAD Jan 14 01:21:30.969000 audit[5800]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5789 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432646261326533363065643535353235383039656631316230663430 Jan 14 01:21:30.969000 audit: BPF prog-id=259 op=LOAD Jan 14 01:21:30.969000 audit[5800]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5789 pid=5800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432646261326533363065643535353235383039656631316230663430 Jan 14 01:21:30.975000 audit[5820]: NETFILTER_CFG table=filter:132 family=2 entries=17 op=nft_register_rule pid=5820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:30.975000 audit[5820]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff88bae090 a2=0 a3=7fff88bae07c items=0 ppid=4124 pid=5820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:30.977000 audit[5820]: NETFILTER_CFG table=nat:133 family=2 entries=35 op=nft_register_chain pid=5820 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:30.977000 audit[5820]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff88bae090 a2=0 a3=7fff88bae07c items=0 ppid=4124 pid=5820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:30.977000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:31.007879 containerd[2507]: time="2026-01-14T01:21:31.007851463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-q7p7k,Uid:2bd1a43b-e98f-4a1f-8c59-f0c6872188ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"42dba2e360ed55525809ef11b0f409301b37152e46b0e79d1c60ea08821cc32d\"" Jan 14 01:21:31.008927 containerd[2507]: time="2026-01-14T01:21:31.008907840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:21:31.282699 containerd[2507]: time="2026-01-14T01:21:31.282579025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:31.313515 containerd[2507]: time="2026-01-14T01:21:31.313468812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:21:31.313606 containerd[2507]: time="2026-01-14T01:21:31.313497608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:31.313735 kubelet[4019]: E0114 01:21:31.313703 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:21:31.314020 kubelet[4019]: E0114 01:21:31.313748 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:21:31.314020 kubelet[4019]: E0114 01:21:31.313824 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-q7p7k_calico-system(2bd1a43b-e98f-4a1f-8c59-f0c6872188ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:31.314020 kubelet[4019]: E0114 01:21:31.313856 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:21:31.459772 systemd-networkd[2133]: calie19d3905334: Gained IPv6LL Jan 14 01:21:31.746626 containerd[2507]: time="2026-01-14T01:21:31.746590742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f76fbbdcb-fzd7d,Uid:ec043dd0-d5b9-4795-bda8-379dd9ed27d6,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:31.751103 containerd[2507]: time="2026-01-14T01:21:31.751075072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7dnf,Uid:19451c9d-d740-439e-ba98-ce86a4dce532,Namespace:calico-system,Attempt:0,}" Jan 14 01:21:31.758745 containerd[2507]: 
time="2026-01-14T01:21:31.758687211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ndtsm,Uid:12820403-9657-4768-aa8a-a08f227bfebe,Namespace:kube-system,Attempt:0,}" Jan 14 01:21:31.897921 systemd-networkd[2133]: cali64ebf8fb2f7: Link UP Jan 14 01:21:31.898144 systemd-networkd[2133]: cali64ebf8fb2f7: Gained carrier Jan 14 01:21:31.903466 kubelet[4019]: E0114 01:21:31.902923 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:21:31.903466 kubelet[4019]: E0114 01:21:31.903343 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:21:31.903673 kubelet[4019]: E0114 01:21:31.903532 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:21:31.907630 systemd-networkd[2133]: cali0e64278084b: Gained IPv6LL Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.791 [INFO][5827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0 calico-kube-controllers-7f76fbbdcb- calico-system ec043dd0-d5b9-4795-bda8-379dd9ed27d6 807 0 2026-01-14 01:21:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f76fbbdcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c calico-kube-controllers-7f76fbbdcb-fzd7d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali64ebf8fb2f7 [] [] }} ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.792 [INFO][5827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" 
WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.838 [INFO][5847] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" HandleID="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.839 [INFO][5847] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" HandleID="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-9807086b3c", "pod":"calico-kube-controllers-7f76fbbdcb-fzd7d", "timestamp":"2026-01-14 01:21:31.838894322 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.839 [INFO][5847] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.839 [INFO][5847] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.839 [INFO][5847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.849 [INFO][5847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.855 [INFO][5847] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.861 [INFO][5847] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.864 [INFO][5847] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.868 [INFO][5847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.868 [INFO][5847] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.873 [INFO][5847] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8 Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.878 [INFO][5847] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" 
host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.890 [INFO][5847] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.70/26] block=192.168.63.64/26 handle="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.890 [INFO][5847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.70/26] handle="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.890 [INFO][5847] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:21:31.921521 containerd[2507]: 2026-01-14 01:21:31.890 [INFO][5847] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.70/26] IPv6=[] ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" HandleID="k8s-pod-network.beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Workload="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" Jan 14 01:21:31.923886 containerd[2507]: 2026-01-14 01:21:31.893 [INFO][5827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0", GenerateName:"calico-kube-controllers-7f76fbbdcb-", Namespace:"calico-system", SelfLink:"", UID:"ec043dd0-d5b9-4795-bda8-379dd9ed27d6", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f76fbbdcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"calico-kube-controllers-7f76fbbdcb-fzd7d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64ebf8fb2f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:31.923886 containerd[2507]: 2026-01-14 01:21:31.893 [INFO][5827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.70/32] ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" Jan 14 01:21:31.923886 containerd[2507]: 2026-01-14 01:21:31.893 [INFO][5827] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64ebf8fb2f7 ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" Jan 14 01:21:31.923886 containerd[2507]: 2026-01-14 01:21:31.897 [INFO][5827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" Jan 14 01:21:31.923886 containerd[2507]: 2026-01-14 01:21:31.901 [INFO][5827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0", GenerateName:"calico-kube-controllers-7f76fbbdcb-", Namespace:"calico-system", SelfLink:"", UID:"ec043dd0-d5b9-4795-bda8-379dd9ed27d6", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f76fbbdcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8", Pod:"calico-kube-controllers-7f76fbbdcb-fzd7d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64ebf8fb2f7", MAC:"2e:52:cc:13:fa:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:31.923886 containerd[2507]: 2026-01-14 01:21:31.919 [INFO][5827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" Namespace="calico-system" Pod="calico-kube-controllers-7f76fbbdcb-fzd7d" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-calico--kube--controllers--7f76fbbdcb--fzd7d-eth0" Jan 14 01:21:31.956000 audit[5896]: NETFILTER_CFG table=filter:134 family=2 entries=52 op=nft_register_chain pid=5896 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:31.961179 kernel: kauditd_printk_skb: 328 callbacks suppressed Jan 14 01:21:31.961457 kernel: audit: type=1325 audit(1768353691.956:733): table=filter:134 family=2 entries=52 op=nft_register_chain 
pid=5896 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:31.973144 containerd[2507]: time="2026-01-14T01:21:31.973101038Z" level=info msg="connecting to shim beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8" address="unix:///run/containerd/s/4ed5e27cc210710556dfc5ffd4ff68fff610bdfeef6a28082db4d83d2c1c5e50" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:31.956000 audit[5896]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffc83459e90 a2=0 a3=7ffc83459e7c items=0 ppid=5362 pid=5896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:31.978723 kernel: audit: type=1300 audit(1768353691.956:733): arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffc83459e90 a2=0 a3=7ffc83459e7c items=0 ppid=5362 pid=5896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:31.978786 kernel: audit: type=1327 audit(1768353691.956:733): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:31.956000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:32.000000 audit[5925]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:32.005567 kernel: audit: type=1325 audit(1768353692.000:734): table=filter:135 family=2 entries=14 op=nft_register_rule pid=5925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:32.000000 audit[5925]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea4860810 a2=0 a3=7ffea48607fc items=0 ppid=4124 pid=5925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.012209 kernel: audit: type=1300 audit(1768353692.000:734): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea4860810 a2=0 a3=7ffea48607fc items=0 ppid=4124 pid=5925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.012373 kernel: audit: type=1327 audit(1768353692.000:734): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:32.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:32.004000 audit[5925]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:32.021595 kernel: audit: type=1325 audit(1768353692.004:735): table=nat:136 family=2 entries=20 op=nft_register_rule pid=5925 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:32.021656 kernel: audit: type=1300 audit(1768353692.004:735): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffea4860810 a2=0 a3=7ffea48607fc items=0 ppid=4124 
pid=5925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.004000 audit[5925]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffea4860810 a2=0 a3=7ffea48607fc items=0 ppid=4124 pid=5925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.022605 kernel: audit: type=1327 audit(1768353692.004:735): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:32.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:32.032060 systemd[1]: Started cri-containerd-beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8.scope - libcontainer container beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8. Jan 14 01:21:32.032421 systemd-networkd[2133]: calif0eec472167: Link UP Jan 14 01:21:32.035646 systemd-networkd[2133]: calif0eec472167: Gained carrier Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.852 [INFO][5845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0 csi-node-driver- calico-system 19451c9d-d740-439e-ba98-ce86a4dce532 698 0 2026-01-14 01:21:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c csi-node-driver-c7dnf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif0eec472167 [] [] }} ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.852 [INFO][5845] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.888 [INFO][5875] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" HandleID="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Workload="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.889 [INFO][5875] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" HandleID="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Workload="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b8020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578.0.0-p-9807086b3c", 
"pod":"csi-node-driver-c7dnf", "timestamp":"2026-01-14 01:21:31.888970353 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.889 [INFO][5875] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.890 [INFO][5875] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.890 [INFO][5875] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.949 [INFO][5875] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.968 [INFO][5875] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.990 [INFO][5875] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.993 [INFO][5875] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.997 [INFO][5875] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:31.999 [INFO][5875] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:32.001 [INFO][5875] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40 Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:32.014 [INFO][5875] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:32.028 [INFO][5875] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.71/26] block=192.168.63.64/26 handle="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:32.028 [INFO][5875] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.71/26] handle="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:32.028 [INFO][5875] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:21:32.050267 containerd[2507]: 2026-01-14 01:21:32.028 [INFO][5875] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.71/26] IPv6=[] ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" HandleID="k8s-pod-network.403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Workload="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" Jan 14 01:21:32.051096 containerd[2507]: 2026-01-14 01:21:32.030 [INFO][5845] cni-plugin/k8s.go 418: Populated endpoint ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19451c9d-d740-439e-ba98-ce86a4dce532", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"csi-node-driver-c7dnf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0eec472167", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:32.051096 containerd[2507]: 2026-01-14 01:21:32.030 [INFO][5845] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.71/32] ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" Jan 14 01:21:32.051096 containerd[2507]: 2026-01-14 01:21:32.030 [INFO][5845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0eec472167 ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" Jan 14 01:21:32.051096 containerd[2507]: 2026-01-14 01:21:32.035 [INFO][5845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" Jan 14 01:21:32.051096 containerd[2507]: 2026-01-14 01:21:32.036 [INFO][5845] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"19451c9d-d740-439e-ba98-ce86a4dce532", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40", Pod:"csi-node-driver-c7dnf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif0eec472167", MAC:"f6:0e:d8:a9:14:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:32.051096 containerd[2507]: 2026-01-14 01:21:32.047 [INFO][5845] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" Namespace="calico-system" Pod="csi-node-driver-c7dnf" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-csi--node--driver--c7dnf-eth0" Jan 14 01:21:32.058000 audit: BPF prog-id=260 op=LOAD Jan 14 01:21:32.060000 audit: BPF prog-id=261 op=LOAD Jan 14 01:21:32.060000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5906 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656437343936343531666131353437313335353838373466313664 Jan 14 01:21:32.060000 audit: BPF prog-id=261 op=UNLOAD Jan 14 01:21:32.060000 audit[5918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5906 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.060000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656437343936343531666131353437313335353838373466313664 Jan 14 01:21:32.060000 audit: BPF prog-id=262 op=LOAD Jan 14 01:21:32.060000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5906 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656437343936343531666131353437313335353838373466313664 Jan 14 01:21:32.060000 audit: BPF prog-id=263 op=LOAD Jan 14 01:21:32.060000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5906 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656437343936343531666131353437313335353838373466313664 Jan 14 01:21:32.060000 audit: BPF prog-id=263 op=UNLOAD Jan 14 01:21:32.060000 audit[5918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5906 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656437343936343531666131353437313335353838373466313664 Jan 14 01:21:32.060000 audit: BPF prog-id=262 op=UNLOAD Jan 14 01:21:32.060000 audit[5918]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5906 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.062600 kernel: audit: type=1334 audit(1768353692.058:736): prog-id=260 op=LOAD Jan 14 01:21:32.060000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656437343936343531666131353437313335353838373466313664 Jan 14 01:21:32.060000 audit: BPF prog-id=264 op=LOAD Jan 14 01:21:32.060000 audit[5918]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5906 pid=5918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.060000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656437343936343531666131353437313335353838373466313664 Jan 14 01:21:32.076000 audit[5949]: NETFILTER_CFG table=filter:137 family=2 entries=56 op=nft_register_chain pid=5949 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:32.076000 audit[5949]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffeb2d45330 a2=0 a3=7ffeb2d4531c items=0 ppid=5362 pid=5949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.076000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:32.104137 systemd-networkd[2133]: cali630c2adefe2: Link UP Jan 14 01:21:32.105120 systemd-networkd[2133]: cali630c2adefe2: Gained carrier Jan 14 01:21:32.124341 containerd[2507]: time="2026-01-14T01:21:32.123711365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f76fbbdcb-fzd7d,Uid:ec043dd0-d5b9-4795-bda8-379dd9ed27d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"beed7496451fa154713558874f16df1dbbc1175e840746b89aade3913517a6a8\"" Jan 14 01:21:32.126601 containerd[2507]: time="2026-01-14T01:21:32.126240076Z" level=info msg="connecting to shim 403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40" address="unix:///run/containerd/s/f6978115d97ca61071faae9294b6d59e7de7cfb256e9a0523131ceb62522ee6b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:32.130166 containerd[2507]: time="2026-01-14T01:21:32.129559785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:31.854 [INFO][5841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0 coredns-66bc5c9577- kube-system 12820403-9657-4768-aa8a-a08f227bfebe 813 0 2026-01-14 01:20:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578.0.0-p-9807086b3c coredns-66bc5c9577-ndtsm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali630c2adefe2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:31.854 [INFO][5841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:31.894 [INFO][5877] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" HandleID="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Workload="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:31.894 [INFO][5877] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" HandleID="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Workload="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578.0.0-p-9807086b3c", "pod":"coredns-66bc5c9577-ndtsm", "timestamp":"2026-01-14 01:21:31.894274097 +0000 UTC"}, Hostname:"ci-4578.0.0-p-9807086b3c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:31.894 [INFO][5877] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.028 [INFO][5877] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.028 [INFO][5877] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578.0.0-p-9807086b3c' Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.051 [INFO][5877] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.063 [INFO][5877] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.073 [INFO][5877] ipam/ipam.go 511: Trying affinity for 192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.075 [INFO][5877] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.077 [INFO][5877] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.078 [INFO][5877] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.079 [INFO][5877] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964 Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.084 [INFO][5877] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.098 [INFO][5877] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.63.72/26] block=192.168.63.64/26 handle="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" 
host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.098 [INFO][5877] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.72/26] handle="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" host="ci-4578.0.0-p-9807086b3c" Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.098 [INFO][5877] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:21:32.138519 containerd[2507]: 2026-01-14 01:21:32.098 [INFO][5877] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.63.72/26] IPv6=[] ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" HandleID="k8s-pod-network.0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Workload="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" Jan 14 01:21:32.139088 containerd[2507]: 2026-01-14 01:21:32.101 [INFO][5841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"12820403-9657-4768-aa8a-a08f227bfebe", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 20, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"", Pod:"coredns-66bc5c9577-ndtsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali630c2adefe2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:32.139088 containerd[2507]: 2026-01-14 01:21:32.101 [INFO][5841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.72/32] ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" 
Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" Jan 14 01:21:32.139088 containerd[2507]: 2026-01-14 01:21:32.102 [INFO][5841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali630c2adefe2 ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" Jan 14 01:21:32.139088 containerd[2507]: 2026-01-14 01:21:32.119 [INFO][5841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" Jan 14 01:21:32.139088 containerd[2507]: 2026-01-14 01:21:32.120 [INFO][5841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"12820403-9657-4768-aa8a-a08f227bfebe", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 20, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578.0.0-p-9807086b3c", ContainerID:"0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964", Pod:"coredns-66bc5c9577-ndtsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali630c2adefe2", MAC:"3e:1e:d2:e1:02:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:21:32.139309 containerd[2507]: 2026-01-14 01:21:32.134 [INFO][5841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" Namespace="kube-system" Pod="coredns-66bc5c9577-ndtsm" WorkloadEndpoint="ci--4578.0.0--p--9807086b3c-k8s-coredns--66bc5c9577--ndtsm-eth0" Jan 14 01:21:32.156917 systemd[1]: Started cri-containerd-403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40.scope - libcontainer container 403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40. Jan 14 01:21:32.157000 audit[5995]: NETFILTER_CFG table=filter:138 family=2 entries=62 op=nft_register_chain pid=5995 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:21:32.157000 audit[5995]: SYSCALL arch=c000003e syscall=46 success=yes exit=27948 a0=3 a1=7fff10ad65d0 a2=0 a3=7fff10ad65bc items=0 ppid=5362 pid=5995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.157000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:21:32.165000 audit: BPF prog-id=265 op=LOAD Jan 14 01:21:32.165000 audit: BPF prog-id=266 op=LOAD Jan 14 01:21:32.165000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336436323038646463343761656136613961383239366332373262 Jan 14 01:21:32.165000 audit: BPF prog-id=266 op=UNLOAD Jan 14 01:21:32.165000 audit[5978]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336436323038646463343761656136613961383239366332373262 Jan 14 01:21:32.165000 audit: BPF prog-id=267 op=LOAD Jan 14 01:21:32.165000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336436323038646463343761656136613961383239366332373262 Jan 14 01:21:32.165000 audit: BPF prog-id=268 op=LOAD Jan 14 01:21:32.165000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336436323038646463343761656136613961383239366332373262 Jan 14 01:21:32.165000 audit: BPF prog-id=268 op=UNLOAD Jan 14 01:21:32.165000 audit[5978]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336436323038646463343761656136613961383239366332373262 Jan 14 01:21:32.165000 audit: BPF prog-id=267 op=UNLOAD Jan 14 01:21:32.165000 audit[5978]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.165000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336436323038646463343761656136613961383239366332373262 Jan 14 01:21:32.166000 audit: BPF prog-id=269 op=LOAD Jan 14 01:21:32.166000 audit[5978]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=5967 pid=5978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430336436323038646463343761656136613961383239366332373262 Jan 14 01:21:32.183060 containerd[2507]: time="2026-01-14T01:21:32.183035357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7dnf,Uid:19451c9d-d740-439e-ba98-ce86a4dce532,Namespace:calico-system,Attempt:0,} returns sandbox id \"403d6208ddc47aea6a9a8296c272b7a00451e5615cd5f7b9ec3d121726139d40\"" Jan 14 01:21:32.183574 containerd[2507]: time="2026-01-14T01:21:32.183477644Z" level=info msg="connecting to shim 0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964" address="unix:///run/containerd/s/35d90df5f8144f7955817a060d6bd20272dcac65fbdf1ec4effd066e77c2ee73" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:21:32.203792 systemd[1]: Started cri-containerd-0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964.scope - libcontainer container 0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964. 
Jan 14 01:21:32.215000 audit: BPF prog-id=270 op=LOAD Jan 14 01:21:32.215000 audit: BPF prog-id=271 op=LOAD Jan 14 01:21:32.215000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=6019 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039313865333931626161346535373836306130663430396235316262 Jan 14 01:21:32.215000 audit: BPF prog-id=271 op=UNLOAD Jan 14 01:21:32.215000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6019 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039313865333931626161346535373836306130663430396235316262 Jan 14 01:21:32.215000 audit: BPF prog-id=272 op=LOAD Jan 14 01:21:32.215000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=6019 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039313865333931626161346535373836306130663430396235316262 Jan 14 01:21:32.215000 audit: BPF prog-id=273 op=LOAD Jan 14 01:21:32.215000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=6019 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039313865333931626161346535373836306130663430396235316262 Jan 14 01:21:32.215000 audit: BPF prog-id=273 op=UNLOAD Jan 14 01:21:32.215000 audit[6030]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6019 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039313865333931626161346535373836306130663430396235316262 Jan 14 01:21:32.215000 audit: BPF prog-id=272 op=UNLOAD Jan 14 01:21:32.215000 audit[6030]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6019 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039313865333931626161346535373836306130663430396235316262 Jan 14 01:21:32.215000 audit: BPF prog-id=274 op=LOAD Jan 14 01:21:32.215000 audit[6030]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=6019 pid=6030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039313865333931626161346535373836306130663430396235316262 Jan 14 01:21:32.246006 containerd[2507]: time="2026-01-14T01:21:32.245988610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ndtsm,Uid:12820403-9657-4768-aa8a-a08f227bfebe,Namespace:kube-system,Attempt:0,} returns sandbox id \"0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964\"" Jan 14 01:21:32.253894 containerd[2507]: time="2026-01-14T01:21:32.253837213Z" level=info msg="CreateContainer within sandbox \"0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:21:32.279291 containerd[2507]: time="2026-01-14T01:21:32.279228571Z" level=info msg="Container df0166aeb849ac9909a70cab9e42c6d413533c3e0f6247488b40928751c7c534: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:21:32.298532 containerd[2507]: time="2026-01-14T01:21:32.298466242Z" level=info msg="CreateContainer within sandbox \"0918e391baa4e57860a0f409b51bb38a651491b55ea25f0bdec42f52da0cc964\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df0166aeb849ac9909a70cab9e42c6d413533c3e0f6247488b40928751c7c534\"" Jan 14 01:21:32.299254 containerd[2507]: time="2026-01-14T01:21:32.299137254Z" level=info msg="StartContainer for \"df0166aeb849ac9909a70cab9e42c6d413533c3e0f6247488b40928751c7c534\"" Jan 14 01:21:32.300282 containerd[2507]: time="2026-01-14T01:21:32.300247409Z" level=info msg="connecting to shim df0166aeb849ac9909a70cab9e42c6d413533c3e0f6247488b40928751c7c534" address="unix:///run/containerd/s/35d90df5f8144f7955817a060d6bd20272dcac65fbdf1ec4effd066e77c2ee73" protocol=ttrpc version=3 Jan 14 01:21:32.317684 systemd[1]: Started cri-containerd-df0166aeb849ac9909a70cab9e42c6d413533c3e0f6247488b40928751c7c534.scope - libcontainer container df0166aeb849ac9909a70cab9e42c6d413533c3e0f6247488b40928751c7c534. 
Jan 14 01:21:32.325000 audit: BPF prog-id=275 op=LOAD Jan 14 01:21:32.326000 audit: BPF prog-id=276 op=LOAD Jan 14 01:21:32.326000 audit[6055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=6019 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303136366165623834396163393930396137306361623965343263 Jan 14 01:21:32.326000 audit: BPF prog-id=276 op=UNLOAD Jan 14 01:21:32.326000 audit[6055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6019 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303136366165623834396163393930396137306361623965343263 Jan 14 01:21:32.326000 audit: BPF prog-id=277 op=LOAD Jan 14 01:21:32.326000 audit[6055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=6019 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303136366165623834396163393930396137306361623965343263 Jan 14 01:21:32.326000 audit: BPF prog-id=278 op=LOAD Jan 14 01:21:32.326000 audit[6055]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=6019 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303136366165623834396163393930396137306361623965343263 Jan 14 01:21:32.326000 audit: BPF prog-id=278 op=UNLOAD Jan 14 01:21:32.326000 audit[6055]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6019 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303136366165623834396163393930396137306361623965343263 Jan 14 01:21:32.326000 audit: BPF prog-id=277 op=UNLOAD Jan 14 01:21:32.326000 audit[6055]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6019 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303136366165623834396163393930396137306361623965343263 Jan 14 01:21:32.326000 audit: BPF prog-id=279 op=LOAD Jan 14 01:21:32.326000 audit[6055]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=6019 pid=6055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:32.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466303136366165623834396163393930396137306361623965343263 Jan 14 01:21:32.347288 containerd[2507]: time="2026-01-14T01:21:32.347069261Z" level=info msg="StartContainer for \"df0166aeb849ac9909a70cab9e42c6d413533c3e0f6247488b40928751c7c534\" returns successfully" Jan 14 01:21:32.396891 containerd[2507]: time="2026-01-14T01:21:32.396864324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:32.400073 containerd[2507]: time="2026-01-14T01:21:32.400047017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:21:32.400123 containerd[2507]: time="2026-01-14T01:21:32.400109580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:32.400294 kubelet[4019]: E0114 01:21:32.400227 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:32.400294 kubelet[4019]: E0114 01:21:32.400264 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:32.400584 kubelet[4019]: E0114 01:21:32.400399 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f76fbbdcb-fzd7d_calico-system(ec043dd0-d5b9-4795-bda8-379dd9ed27d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:32.400584 kubelet[4019]: E0114 01:21:32.400437 4019 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:21:32.401123 containerd[2507]: time="2026-01-14T01:21:32.401097929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:21:32.483773 systemd-networkd[2133]: cali288a7fed3c7: Gained IPv6LL Jan 14 01:21:32.660605 containerd[2507]: time="2026-01-14T01:21:32.660498652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:32.664630 containerd[2507]: time="2026-01-14T01:21:32.664558149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:21:32.664729 containerd[2507]: time="2026-01-14T01:21:32.664674506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:32.664945 kubelet[4019]: E0114 01:21:32.664906 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:21:32.664996 kubelet[4019]: E0114 01:21:32.664950 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:21:32.665103 kubelet[4019]: E0114 01:21:32.665076 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:32.666713 containerd[2507]: time="2026-01-14T01:21:32.666678236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:21:32.903662 kubelet[4019]: E0114 01:21:32.903614 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:21:32.909068 kubelet[4019]: E0114 01:21:32.908921 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:21:32.926765 containerd[2507]: time="2026-01-14T01:21:32.926607204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:32.932476 containerd[2507]: time="2026-01-14T01:21:32.932427362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:21:32.932597 containerd[2507]: time="2026-01-14T01:21:32.932521382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:32.932972 kubelet[4019]: E0114 01:21:32.932631 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:21:32.932972 kubelet[4019]: E0114 01:21:32.932660 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:21:32.932972 kubelet[4019]: E0114 01:21:32.932711 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:32.932972 kubelet[4019]: E0114 01:21:32.932743 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:32.955058 kubelet[4019]: I0114 01:21:32.955012 4019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ndtsm" podStartSLOduration=43.954997125 podStartE2EDuration="43.954997125s" podCreationTimestamp="2026-01-14 01:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:21:32.954885159 +0000 UTC m=+48.309925536" watchObservedRunningTime="2026-01-14 
01:21:32.954997125 +0000 UTC m=+48.310037502" Jan 14 01:21:33.025000 audit[6093]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=6093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:33.025000 audit[6093]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffeab93260 a2=0 a3=7fffeab9324c items=0 ppid=4124 pid=6093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:33.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:33.032000 audit[6093]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=6093 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:21:33.032000 audit[6093]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fffeab93260 a2=0 a3=7fffeab9324c items=0 ppid=4124 pid=6093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:21:33.032000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:21:33.379658 systemd-networkd[2133]: calif0eec472167: Gained IPv6LL Jan 14 01:21:33.635631 systemd-networkd[2133]: cali64ebf8fb2f7: Gained IPv6LL Jan 14 01:21:33.891773 systemd-networkd[2133]: cali630c2adefe2: Gained IPv6LL Jan 14 01:21:33.911071 kubelet[4019]: E0114 01:21:33.911035 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:21:33.912646 kubelet[4019]: E0114 01:21:33.912611 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:37.740609 containerd[2507]: time="2026-01-14T01:21:37.740554135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:21:37.999352 containerd[2507]: time="2026-01-14T01:21:37.999249590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:38.002915 containerd[2507]: 
time="2026-01-14T01:21:38.002879307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:21:38.003071 containerd[2507]: time="2026-01-14T01:21:38.002886879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:38.003110 kubelet[4019]: E0114 01:21:38.003055 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:38.003110 kubelet[4019]: E0114 01:21:38.003098 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:21:38.003572 kubelet[4019]: E0114 01:21:38.003174 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:38.004395 containerd[2507]: time="2026-01-14T01:21:38.004366879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:21:38.262512 containerd[2507]: time="2026-01-14T01:21:38.262396540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:38.269943 containerd[2507]: time="2026-01-14T01:21:38.269892240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:21:38.270029 containerd[2507]: time="2026-01-14T01:21:38.269972559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:38.270155 kubelet[4019]: E0114 01:21:38.270112 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:38.270199 kubelet[4019]: E0114 01:21:38.270162 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:21:38.270272 kubelet[4019]: E0114 01:21:38.270255 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:38.270508 kubelet[4019]: E0114 01:21:38.270303 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:21:43.740083 containerd[2507]: time="2026-01-14T01:21:43.739507287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:21:44.012886 containerd[2507]: time="2026-01-14T01:21:44.012782483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:44.015715 containerd[2507]: time="2026-01-14T01:21:44.015682644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:21:44.015775 containerd[2507]: time="2026-01-14T01:21:44.015747286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:44.015894 kubelet[4019]: E0114 01:21:44.015857 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:21:44.015894 kubelet[4019]: E0114 01:21:44.015890 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:21:44.016215 kubelet[4019]: E0114 01:21:44.016031 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-q7p7k_calico-system(2bd1a43b-e98f-4a1f-8c59-f0c6872188ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:44.016215 kubelet[4019]: E0114 01:21:44.016061 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:21:44.016668 containerd[2507]: time="2026-01-14T01:21:44.016562422Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:44.277997 containerd[2507]: time="2026-01-14T01:21:44.277896933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:44.283636 containerd[2507]: time="2026-01-14T01:21:44.283608158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:44.283727 containerd[2507]: time="2026-01-14T01:21:44.283670392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:44.283813 kubelet[4019]: E0114 01:21:44.283775 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:44.283855 kubelet[4019]: E0114 01:21:44.283821 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:44.283912 kubelet[4019]: E0114 01:21:44.283887 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-dc47777bb-hzl5d_calico-apiserver(81ae1783-805d-45cb-a9d3-21a22f1883e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:44.283945 kubelet[4019]: E0114 01:21:44.283927 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:21:45.740221 containerd[2507]: time="2026-01-14T01:21:45.740018799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:21:46.002619 containerd[2507]: time="2026-01-14T01:21:46.002384307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:46.005695 containerd[2507]: time="2026-01-14T01:21:46.005663186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:21:46.005754 containerd[2507]: time="2026-01-14T01:21:46.005725622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:46.005907 kubelet[4019]: E0114 01:21:46.005839 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:46.006154 kubelet[4019]: E0114 01:21:46.005914 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:21:46.006154 kubelet[4019]: E0114 01:21:46.006104 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-dc47777bb-hmsh5_calico-apiserver(4c98e701-5dc1-42d2-b8b2-315dbbe213e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:46.006154 kubelet[4019]: E0114 01:21:46.006135 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:21:46.006584 containerd[2507]: time="2026-01-14T01:21:46.006380768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:21:46.268283 containerd[2507]: time="2026-01-14T01:21:46.268176813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:46.271497 containerd[2507]: time="2026-01-14T01:21:46.271396061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:21:46.271497 containerd[2507]: time="2026-01-14T01:21:46.271429564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:46.271642 kubelet[4019]: E0114 01:21:46.271604 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:46.271700 kubelet[4019]: E0114 01:21:46.271648 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:21:46.271738 kubelet[4019]: E0114 01:21:46.271718 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f76fbbdcb-fzd7d_calico-system(ec043dd0-d5b9-4795-bda8-379dd9ed27d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:21:46.271779 kubelet[4019]: E0114 01:21:46.271757 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:21:48.741461 containerd[2507]: time="2026-01-14T01:21:48.741420838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:21:49.010416 containerd[2507]: time="2026-01-14T01:21:49.010309711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:49.013346 containerd[2507]: time="2026-01-14T01:21:49.013311743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:21:49.013425 containerd[2507]: time="2026-01-14T01:21:49.013374347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:49.013570 kubelet[4019]: E0114 01:21:49.013536 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:21:49.013984 kubelet[4019]: E0114 01:21:49.013577 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:21:49.013984 kubelet[4019]: E0114 01:21:49.013642 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:49.014471 containerd[2507]: time="2026-01-14T01:21:49.014435912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:21:49.278620 containerd[2507]: time="2026-01-14T01:21:49.278533120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:21:49.283005 containerd[2507]: time="2026-01-14T01:21:49.282957821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:21:49.283005 containerd[2507]: time="2026-01-14T01:21:49.282982872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:21:49.283170 kubelet[4019]: E0114 01:21:49.283122 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:21:49.283222 kubelet[4019]: E0114 01:21:49.283178 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:21:49.283300 kubelet[4019]: E0114 01:21:49.283285 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:21:49.283557 kubelet[4019]: E0114 01:21:49.283327 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:21:51.740564 kubelet[4019]: E0114 01:21:51.740391 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:21:57.739755 kubelet[4019]: E0114 01:21:57.739445 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:21:58.743385 kubelet[4019]: E0114 01:21:58.743340 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:21:58.743818 kubelet[4019]: E0114 01:21:58.743668 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:21:58.743869 kubelet[4019]: E0114 01:21:58.743815 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:22:02.743709 containerd[2507]: time="2026-01-14T01:22:02.743178779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:22:02.747013 kubelet[4019]: E0114 01:22:02.746841 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:22:03.006007 containerd[2507]: time="2026-01-14T01:22:03.005891432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:03.009235 containerd[2507]: time="2026-01-14T01:22:03.009188186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:22:03.009352 containerd[2507]: time="2026-01-14T01:22:03.009195816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:03.009431 kubelet[4019]: E0114 01:22:03.009370 4019 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:03.009473 kubelet[4019]: E0114 01:22:03.009437 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:03.009535 kubelet[4019]: E0114 01:22:03.009517 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:03.010256 containerd[2507]: time="2026-01-14T01:22:03.010234590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:22:03.276675 containerd[2507]: time="2026-01-14T01:22:03.276586179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:03.296221 containerd[2507]: time="2026-01-14T01:22:03.296190982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:22:03.296313 containerd[2507]: time="2026-01-14T01:22:03.296256699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:03.296404 kubelet[4019]: E0114 01:22:03.296361 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:03.296457 kubelet[4019]: E0114 01:22:03.296415 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:03.296498 kubelet[4019]: E0114 01:22:03.296475 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:03.296555 kubelet[4019]: E0114 01:22:03.296525 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" 
for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:22:08.742934 containerd[2507]: time="2026-01-14T01:22:08.742666498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:09.021577 containerd[2507]: time="2026-01-14T01:22:09.020795232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:09.025503 containerd[2507]: time="2026-01-14T01:22:09.025096033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:09.025503 containerd[2507]: time="2026-01-14T01:22:09.025184891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:09.025797 kubelet[4019]: E0114 01:22:09.025756 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:09.026505 kubelet[4019]: E0114 01:22:09.026102 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:09.026505 kubelet[4019]: E0114 01:22:09.026210 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-dc47777bb-hzl5d_calico-apiserver(81ae1783-805d-45cb-a9d3-21a22f1883e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:09.026653 kubelet[4019]: E0114 01:22:09.026633 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:22:11.741300 containerd[2507]: time="2026-01-14T01:22:11.741240221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:12.005448 containerd[2507]: time="2026-01-14T01:22:12.005323745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:12.010030 containerd[2507]: time="2026-01-14T01:22:12.010003132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 
01:22:12.010098 containerd[2507]: time="2026-01-14T01:22:12.010073177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:12.010272 kubelet[4019]: E0114 01:22:12.010241 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:12.010602 kubelet[4019]: E0114 01:22:12.010279 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:12.010602 kubelet[4019]: E0114 01:22:12.010351 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-dc47777bb-hmsh5_calico-apiserver(4c98e701-5dc1-42d2-b8b2-315dbbe213e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:12.010602 kubelet[4019]: E0114 01:22:12.010382 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:22:12.745518 containerd[2507]: time="2026-01-14T01:22:12.743717813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:22:13.013834 containerd[2507]: time="2026-01-14T01:22:13.013724046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:13.019048 containerd[2507]: time="2026-01-14T01:22:13.019014630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:22:13.019130 containerd[2507]: time="2026-01-14T01:22:13.019075981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:13.019243 kubelet[4019]: E0114 01:22:13.019209 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:13.019523 kubelet[4019]: E0114 01:22:13.019248 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:13.019523 kubelet[4019]: E0114 
01:22:13.019333 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f76fbbdcb-fzd7d_calico-system(ec043dd0-d5b9-4795-bda8-379dd9ed27d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:13.020234 kubelet[4019]: E0114 01:22:13.019754 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:22:13.740370 containerd[2507]: time="2026-01-14T01:22:13.740130833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:22:14.005850 containerd[2507]: time="2026-01-14T01:22:14.005644374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:14.010074 containerd[2507]: time="2026-01-14T01:22:14.009993157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:22:14.010376 containerd[2507]: time="2026-01-14T01:22:14.010116204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:14.010583 kubelet[4019]: E0114 01:22:14.010468 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:14.010583 kubelet[4019]: E0114 01:22:14.010555 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:14.010875 kubelet[4019]: E0114 01:22:14.010857 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-q7p7k_calico-system(2bd1a43b-e98f-4a1f-8c59-f0c6872188ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:14.011101 kubelet[4019]: E0114 01:22:14.011082 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:22:17.741910 containerd[2507]: time="2026-01-14T01:22:17.741820474Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:22:17.742889 kubelet[4019]: E0114 01:22:17.742766 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:22:18.022792 containerd[2507]: time="2026-01-14T01:22:18.022504883Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:18.025763 containerd[2507]: time="2026-01-14T01:22:18.025736807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:22:18.025763 containerd[2507]: time="2026-01-14T01:22:18.025778442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:18.025957 kubelet[4019]: E0114 01:22:18.025888 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:18.025999 kubelet[4019]: E0114 01:22:18.025963 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:22:18.026054 kubelet[4019]: E0114 01:22:18.026040 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:18.026806 containerd[2507]: time="2026-01-14T01:22:18.026746392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:22:18.293170 containerd[2507]: time="2026-01-14T01:22:18.292774312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:18.296220 containerd[2507]: time="2026-01-14T01:22:18.296104550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:22:18.296220 containerd[2507]: 
time="2026-01-14T01:22:18.296192749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:18.296573 kubelet[4019]: E0114 01:22:18.296470 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:18.296573 kubelet[4019]: E0114 01:22:18.296531 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:22:18.296751 kubelet[4019]: E0114 01:22:18.296729 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:18.296843 kubelet[4019]: E0114 01:22:18.296821 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:22:23.739668 kubelet[4019]: E0114 01:22:23.739607 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:22:26.743123 kubelet[4019]: E0114 01:22:26.742785 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:22:28.742026 kubelet[4019]: E0114 01:22:28.741730 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:22:28.744665 kubelet[4019]: E0114 01:22:28.744625 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:22:28.744815 kubelet[4019]: E0114 01:22:28.744727 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:22:29.741146 kubelet[4019]: E0114 01:22:29.741098 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:22:30.946773 systemd[1]: Started sshd@7-10.200.4.7:22-10.200.16.10:38676.service - OpenSSH per-connection server daemon (10.200.16.10:38676). Jan 14 01:22:30.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.7:22-10.200.16.10:38676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:30.948253 kernel: kauditd_printk_skb: 99 callbacks suppressed Jan 14 01:22:30.948332 kernel: audit: type=1130 audit(1768353750.946:772): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.7:22-10.200.16.10:38676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:31.497000 audit[6187]: USER_ACCT pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.505506 kernel: audit: type=1101 audit(1768353751.497:773): pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.505628 sshd[6187]: Accepted publickey for core from 10.200.16.10 port 38676 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:31.507285 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:31.505000 audit[6187]: CRED_ACQ pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.514216 kernel: audit: type=1103 audit(1768353751.505:774): pid=6187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.514273 kernel: audit: type=1006 audit(1768353751.505:775): pid=6187 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:22:31.505000 audit[6187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf82c8d70 a2=3 a3=0 items=0 ppid=1 pid=6187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:31.521520 kernel: audit: type=1300 audit(1768353751.505:775): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdf82c8d70 a2=3 a3=0 items=0 ppid=1 pid=6187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:31.505000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:31.523509 kernel: audit: type=1327 audit(1768353751.505:775): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:31.524999 systemd-logind[2479]: New session 11 of user core. Jan 14 01:22:31.532898 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 01:22:31.535000 audit[6187]: USER_START pid=6187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.544168 kernel: audit: type=1105 audit(1768353751.535:776): pid=6187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.544368 kernel: audit: type=1103 audit(1768353751.542:777): pid=6191 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.542000 audit[6191]: CRED_ACQ pid=6191 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.894328 sshd[6191]: Connection closed by 10.200.16.10 port 38676 Jan 14 01:22:31.895224 sshd-session[6187]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:31.895000 audit[6187]: USER_END pid=6187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.901262 systemd[1]: sshd@7-10.200.4.7:22-10.200.16.10:38676.service: Deactivated successfully. Jan 14 01:22:31.902626 kernel: audit: type=1106 audit(1768353751.895:778): pid=6187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.906526 kernel: audit: type=1104 audit(1768353751.896:779): pid=6187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.896000 audit[6187]: CRED_DISP pid=6187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:31.905143 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:22:31.906164 systemd-logind[2479]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:22:31.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.4.7:22-10.200.16.10:38676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:31.908128 systemd-logind[2479]: Removed session 11. 
Jan 14 01:22:37.007463 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:37.007602 kernel: audit: type=1130 audit(1768353757.004:781): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.7:22-10.200.16.10:38692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:37.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.7:22-10.200.16.10:38692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:37.005937 systemd[1]: Started sshd@8-10.200.4.7:22-10.200.16.10:38692.service - OpenSSH per-connection server daemon (10.200.16.10:38692). Jan 14 01:22:37.546000 audit[6206]: USER_ACCT pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.555644 kernel: audit: type=1101 audit(1768353757.546:782): pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.555734 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 38692 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:37.557182 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:37.554000 audit[6206]: CRED_ACQ pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.572212 kernel: audit: type=1103 audit(1768353757.554:783): pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.572298 kernel: audit: type=1006 audit(1768353757.554:784): pid=6206 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 01:22:37.554000 audit[6206]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7960d560 a2=3 a3=0 items=0 ppid=1 pid=6206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:37.580611 systemd-logind[2479]: New session 12 of user core. 
Jan 14 01:22:37.554000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:37.584477 kernel: audit: type=1300 audit(1768353757.554:784): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7960d560 a2=3 a3=0 items=0 ppid=1 pid=6206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:37.585567 kernel: audit: type=1327 audit(1768353757.554:784): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:37.585369 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 01:22:37.587000 audit[6206]: USER_START pid=6206 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.599514 kernel: audit: type=1105 audit(1768353757.587:785): pid=6206 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.598000 audit[6210]: CRED_ACQ pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.607540 kernel: audit: type=1103 audit(1768353757.598:786): pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.741294 kubelet[4019]: E0114 01:22:37.740985 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:22:37.980403 sshd[6210]: Connection closed by 10.200.16.10 port 38692 Jan 14 01:22:37.980843 sshd-session[6206]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:37.981000 audit[6206]: USER_END pid=6206 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.981000 audit[6206]: CRED_DISP pid=6206 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.993460 systemd-logind[2479]: Session 12 logged out. Waiting for processes to exit. 
Jan 14 01:22:37.998123 kernel: audit: type=1106 audit(1768353757.981:787): pid=6206 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.998192 kernel: audit: type=1104 audit(1768353757.981:788): pid=6206 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:37.995266 systemd[1]: sshd@8-10.200.4.7:22-10.200.16.10:38692.service: Deactivated successfully. Jan 14 01:22:37.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.4.7:22-10.200.16.10:38692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:37.999794 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:22:38.003574 systemd-logind[2479]: Removed session 12. Jan 14 01:22:38.741827 kubelet[4019]: E0114 01:22:38.741526 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:22:40.746784 kubelet[4019]: E0114 01:22:40.746540 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:22:42.742821 kubelet[4019]: E0114 01:22:42.742614 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:22:42.744365 kubelet[4019]: E0114 01:22:42.743206 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:22:43.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.7:22-10.200.16.10:47740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:43.098518 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:43.098595 kernel: audit: type=1130 audit(1768353763.094:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.7:22-10.200.16.10:47740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:43.095528 systemd[1]: Started sshd@9-10.200.4.7:22-10.200.16.10:47740.service - OpenSSH per-connection server daemon (10.200.16.10:47740). Jan 14 01:22:43.635000 audit[6222]: USER_ACCT pid=6222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.639451 sshd-session[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:43.636000 audit[6222]: CRED_ACQ pid=6222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.645012 sshd[6222]: Accepted publickey for core from 10.200.16.10 port 47740 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:43.649297 kernel: audit: type=1101 audit(1768353763.635:791): pid=6222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.649361 kernel: audit: type=1103 audit(1768353763.636:792): pid=6222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.650563 systemd-logind[2479]: New session 13 of user core. 
Jan 14 01:22:43.654176 kernel: audit: type=1006 audit(1768353763.636:793): pid=6222 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:22:43.636000 audit[6222]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff0f01600 a2=3 a3=0 items=0 ppid=1 pid=6222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:43.667323 kernel: audit: type=1300 audit(1768353763.636:793): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff0f01600 a2=3 a3=0 items=0 ppid=1 pid=6222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:43.636000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:43.670553 kernel: audit: type=1327 audit(1768353763.636:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:43.671687 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:22:43.673000 audit[6222]: USER_START pid=6222 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.676000 audit[6226]: CRED_ACQ pid=6226 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.685427 kernel: audit: type=1105 audit(1768353763.673:794): pid=6222 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.685472 kernel: audit: type=1103 audit(1768353763.676:795): pid=6226 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:43.740503 kubelet[4019]: E0114 01:22:43.739655 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:22:44.012701 sshd[6226]: Connection closed by 10.200.16.10 port 47740 Jan 14 01:22:44.013631 sshd-session[6222]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:44.013000 audit[6222]: USER_END pid=6222 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:44.013000 audit[6222]: CRED_DISP pid=6222 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:44.021756 systemd[1]: sshd@9-10.200.4.7:22-10.200.16.10:47740.service: Deactivated successfully. Jan 14 01:22:44.023408 kernel: audit: type=1106 audit(1768353764.013:796): pid=6222 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:44.023457 kernel: audit: type=1104 audit(1768353764.013:797): pid=6222 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:44.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.4.7:22-10.200.16.10:47740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:44.025683 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:22:44.026997 systemd-logind[2479]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:22:44.027961 systemd-logind[2479]: Removed session 13. Jan 14 01:22:44.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.7:22-10.200.16.10:47752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:44.127818 systemd[1]: Started sshd@10-10.200.4.7:22-10.200.16.10:47752.service - OpenSSH per-connection server daemon (10.200.16.10:47752). 
Jan 14 01:22:44.692000 audit[6239]: USER_ACCT pid=6239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:44.693298 sshd[6239]: Accepted publickey for core from 10.200.16.10 port 47752 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:44.693000 audit[6239]: CRED_ACQ pid=6239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:44.693000 audit[6239]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef2d5f220 a2=3 a3=0 items=0 ppid=1 pid=6239 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:44.693000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:44.694425 sshd-session[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:44.698700 systemd-logind[2479]: New session 14 of user core. Jan 14 01:22:44.704666 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 01:22:44.706000 audit[6239]: USER_START pid=6239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:44.707000 audit[6243]: CRED_ACQ pid=6243 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:45.070126 sshd[6243]: Connection closed by 10.200.16.10 port 47752 Jan 14 01:22:45.073640 sshd-session[6239]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:45.074000 audit[6239]: USER_END pid=6239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:45.074000 audit[6239]: CRED_DISP pid=6239 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:45.077654 systemd-logind[2479]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:22:45.078846 systemd[1]: sshd@10-10.200.4.7:22-10.200.16.10:47752.service: Deactivated successfully. Jan 14 01:22:45.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.4.7:22-10.200.16.10:47752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:45.083579 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 14 01:22:45.085333 systemd-logind[2479]: Removed session 14. Jan 14 01:22:45.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.7:22-10.200.16.10:47764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:45.184085 systemd[1]: Started sshd@11-10.200.4.7:22-10.200.16.10:47764.service - OpenSSH per-connection server daemon (10.200.16.10:47764). Jan 14 01:22:45.718000 audit[6255]: USER_ACCT pid=6255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:45.719235 sshd[6255]: Accepted publickey for core from 10.200.16.10 port 47764 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:45.720000 audit[6255]: CRED_ACQ pid=6255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:45.720000 audit[6255]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7a0f17e0 a2=3 a3=0 items=0 ppid=1 pid=6255 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:45.720000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:45.723351 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:45.732555 systemd-logind[2479]: New session 15 of user core. Jan 14 01:22:45.735668 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 01:22:45.741000 audit[6255]: USER_START pid=6255 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:45.744000 audit[6259]: CRED_ACQ pid=6259 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:46.092049 sshd[6259]: Connection closed by 10.200.16.10 port 47764 Jan 14 01:22:46.092639 sshd-session[6255]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:46.093000 audit[6255]: USER_END pid=6255 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:46.094000 audit[6255]: CRED_DISP pid=6255 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:46.097352 systemd[1]: sshd@11-10.200.4.7:22-10.200.16.10:47764.service: Deactivated successfully. 
Jan 14 01:22:46.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.4.7:22-10.200.16.10:47764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:46.099728 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:22:46.100103 systemd-logind[2479]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:22:46.104670 systemd-logind[2479]: Removed session 15. Jan 14 01:22:49.740060 kubelet[4019]: E0114 01:22:49.740014 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:22:51.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.7:22-10.200.16.10:50390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:51.212809 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:22:51.212860 kernel: audit: type=1130 audit(1768353771.211:817): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.7:22-10.200.16.10:50390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:51.211588 systemd[1]: Started sshd@12-10.200.4.7:22-10.200.16.10:50390.service - OpenSSH per-connection server daemon (10.200.16.10:50390). 
Jan 14 01:22:51.741024 containerd[2507]: time="2026-01-14T01:22:51.740910037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:22:51.758000 audit[6283]: USER_ACCT pid=6283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:51.763528 kernel: audit: type=1101 audit(1768353771.758:818): pid=6283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:51.762725 sshd-session[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:51.763792 sshd[6283]: Accepted publickey for core from 10.200.16.10 port 50390 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:51.761000 audit[6283]: CRED_ACQ pid=6283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:51.771265 kernel: audit: type=1103 audit(1768353771.761:819): pid=6283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:51.771417 kernel: audit: type=1006 audit(1768353771.761:820): pid=6283 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:22:51.771537 systemd-logind[2479]: New session 16 of user core. Jan 14 01:22:51.761000 audit[6283]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff80775750 a2=3 a3=0 items=0 ppid=1 pid=6283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:51.776456 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 01:22:51.779874 kernel: audit: type=1300 audit(1768353771.761:820): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff80775750 a2=3 a3=0 items=0 ppid=1 pid=6283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:51.761000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:51.781641 kernel: audit: type=1327 audit(1768353771.761:820): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:51.782000 audit[6283]: USER_START pid=6283 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:51.786179 kernel: audit: type=1105 audit(1768353771.782:821): pid=6283 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:51.785000 audit[6287]: CRED_ACQ pid=6287 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:51.790545 kernel: audit: type=1103 audit(1768353771.785:822): pid=6287 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:52.025071 containerd[2507]: time="2026-01-14T01:22:52.024885008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:52.044080 containerd[2507]: time="2026-01-14T01:22:52.044042306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:22:52.044196 containerd[2507]: time="2026-01-14T01:22:52.044131197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:52.044270 kubelet[4019]: E0114 01:22:52.044242 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:52.044524 kubelet[4019]: E0114 01:22:52.044283 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:22:52.044524 kubelet[4019]: E0114 01:22:52.044351 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-dc47777bb-hzl5d_calico-apiserver(81ae1783-805d-45cb-a9d3-21a22f1883e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:52.044524 kubelet[4019]: E0114 01:22:52.044379 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:22:52.133347 sshd[6287]: Connection closed by 10.200.16.10 port 50390 Jan 14 01:22:52.135344 sshd-session[6283]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:52.145084 kernel: audit: type=1106 audit(1768353772.136:823): pid=6283 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:52.136000 audit[6283]: USER_END pid=6283 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:52.146922 systemd[1]: sshd@12-10.200.4.7:22-10.200.16.10:50390.service: Deactivated successfully. Jan 14 01:22:52.136000 audit[6283]: CRED_DISP pid=6283 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:52.150944 systemd-logind[2479]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:22:52.151583 kernel: audit: type=1104 audit(1768353772.136:824): pid=6283 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:52.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.4.7:22-10.200.16.10:50390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:52.153049 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:22:52.156554 systemd-logind[2479]: Removed session 16. 
Jan 14 01:22:55.741338 containerd[2507]: time="2026-01-14T01:22:55.741295825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:22:55.742398 kubelet[4019]: E0114 01:22:55.742351 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:22:56.022059 containerd[2507]: time="2026-01-14T01:22:56.021936185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:56.026034 containerd[2507]: time="2026-01-14T01:22:56.025988236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:22:56.026103 containerd[2507]: time="2026-01-14T01:22:56.025990067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:56.026246 kubelet[4019]: E0114 01:22:56.026203 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:56.026333 kubelet[4019]: E0114 01:22:56.026257 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:22:56.026446 kubelet[4019]: E0114 01:22:56.026329 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f76fbbdcb-fzd7d_calico-system(ec043dd0-d5b9-4795-bda8-379dd9ed27d6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:56.026446 kubelet[4019]: E0114 01:22:56.026362 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:22:57.248250 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:22:57.248355 kernel: audit: type=1130 audit(1768353777.245:826): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.7:22-10.200.16.10:50406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:57.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.7:22-10.200.16.10:50406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:57.247169 systemd[1]: Started sshd@13-10.200.4.7:22-10.200.16.10:50406.service - OpenSSH per-connection server daemon (10.200.16.10:50406). Jan 14 01:22:57.741327 containerd[2507]: time="2026-01-14T01:22:57.741287832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:22:57.794000 audit[6325]: USER_ACCT pid=6325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:57.799510 sshd-session[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:22:57.803738 kernel: audit: type=1101 audit(1768353777.794:827): pid=6325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:57.803775 sshd[6325]: Accepted publickey for core from 10.200.16.10 port 50406 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:22:57.797000 audit[6325]: CRED_ACQ pid=6325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:57.813619 kernel: audit: type=1103 audit(1768353777.797:828): pid=6325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:57.813272 systemd-logind[2479]: New session 17 of user core. Jan 14 01:22:57.814758 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:22:57.797000 audit[6325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd68d9a520 a2=3 a3=0 items=0 ppid=1 pid=6325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:57.827282 kernel: audit: type=1006 audit(1768353777.797:829): pid=6325 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 01:22:57.827346 kernel: audit: type=1300 audit(1768353777.797:829): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd68d9a520 a2=3 a3=0 items=0 ppid=1 pid=6325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:57.797000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:57.822000 audit[6325]: USER_START pid=6325 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:57.839149 kernel: audit: type=1327 audit(1768353777.797:829): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:22:57.839210 kernel: audit: type=1105 audit(1768353777.822:830): pid=6325 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:57.829000 audit[6329]: CRED_ACQ pid=6329 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:57.845958 kernel: audit: type=1103 audit(1768353777.829:831): pid=6329 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:58.023339 containerd[2507]: time="2026-01-14T01:22:58.023158188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:58.035426 containerd[2507]: time="2026-01-14T01:22:58.035307260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:22:58.035426 containerd[2507]: time="2026-01-14T01:22:58.035399988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:58.035699 kubelet[4019]: E0114 01:22:58.035654 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:58.036400 
kubelet[4019]: E0114 01:22:58.035956 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:22:58.036400 kubelet[4019]: E0114 01:22:58.036055 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:58.037548 containerd[2507]: time="2026-01-14T01:22:58.037526852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:22:58.152566 sshd[6329]: Connection closed by 10.200.16.10 port 50406 Jan 14 01:22:58.153631 sshd-session[6325]: pam_unix(sshd:session): session closed for user core Jan 14 01:22:58.153000 audit[6325]: USER_END pid=6325 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:58.160079 systemd[1]: sshd@13-10.200.4.7:22-10.200.16.10:50406.service: Deactivated successfully. Jan 14 01:22:58.153000 audit[6325]: CRED_DISP pid=6325 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:58.164135 kernel: audit: type=1106 audit(1768353778.153:832): pid=6325 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:58.164248 kernel: audit: type=1104 audit(1768353778.153:833): pid=6325 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:22:58.165091 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:22:58.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.4.7:22-10.200.16.10:50406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:58.167830 systemd-logind[2479]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:22:58.168521 systemd-logind[2479]: Removed session 17. 
Jan 14 01:22:58.309316 containerd[2507]: time="2026-01-14T01:22:58.309104060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:58.315603 containerd[2507]: time="2026-01-14T01:22:58.315556482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:22:58.315709 containerd[2507]: time="2026-01-14T01:22:58.315653798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:22:58.315882 kubelet[4019]: E0114 01:22:58.315820 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:58.315928 kubelet[4019]: E0114 01:22:58.315887 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:22:58.316958 kubelet[4019]: E0114 01:22:58.315985 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-768b846847-hs7j8_calico-system(ce22a880-122c-47be-98f6-68b9053dbdfd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:58.316958 kubelet[4019]: E0114 01:22:58.316029 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:22:58.741409 containerd[2507]: time="2026-01-14T01:22:58.741167114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:22:59.002611 containerd[2507]: time="2026-01-14T01:22:59.002480547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:22:59.006899 containerd[2507]: time="2026-01-14T01:22:59.006857910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:22:59.007033 containerd[2507]: time="2026-01-14T01:22:59.006938042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 
01:22:59.007083 kubelet[4019]: E0114 01:22:59.007047 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:59.007122 kubelet[4019]: E0114 01:22:59.007086 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:22:59.007177 kubelet[4019]: E0114 01:22:59.007157 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-q7p7k_calico-system(2bd1a43b-e98f-4a1f-8c59-f0c6872188ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:22:59.007227 kubelet[4019]: E0114 01:22:59.007194 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:23:03.267870 systemd[1]: Started sshd@14-10.200.4.7:22-10.200.16.10:54468.service - OpenSSH per-connection server daemon (10.200.16.10:54468). Jan 14 01:23:03.273854 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:23:03.273931 kernel: audit: type=1130 audit(1768353783.266:835): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.7:22-10.200.16.10:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:03.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.7:22-10.200.16.10:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:03.741167 kubelet[4019]: E0114 01:23:03.740875 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:23:03.825000 audit[6348]: USER_ACCT pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:03.827200 sshd[6348]: Accepted publickey for core from 10.200.16.10 port 54468 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:03.829369 sshd-session[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:03.836558 kernel: audit: type=1101 audit(1768353783.825:836): pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:03.836734 kernel: audit: type=1103 audit(1768353783.826:837): pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:03.826000 audit[6348]: CRED_ACQ pid=6348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:03.840503 kernel: audit: type=1006 audit(1768353783.827:838): pid=6348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:23:03.843228 systemd-logind[2479]: New session 18 of user core. Jan 14 01:23:03.827000 audit[6348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce1fdd780 a2=3 a3=0 items=0 ppid=1 pid=6348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:03.827000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:03.852633 kernel: audit: type=1300 audit(1768353783.827:838): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce1fdd780 a2=3 a3=0 items=0 ppid=1 pid=6348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:03.852693 kernel: audit: type=1327 audit(1768353783.827:838): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:03.853758 systemd[1]: Started session-18.scope - Session 18 of User core. 
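The pulls above all fail the same way: containerd asks ghcr.io for a ghcr.io/flatcar/calico/*:v3.30.4 manifest, receives "404 Not Found", and kubelet surfaces that as ErrImagePull and later ImagePullBackOff. Below is a minimal sketch of reproducing that lookup outside containerd, assuming the repositories are public and that ghcr.io issues anonymous pull tokens through the standard OCI distribution auth flow; the function name tag_exists and the Accept media types are illustrative choices, not taken from the log.

import json
import urllib.error
import urllib.request

def tag_exists(name: str, tag: str) -> bool:
    # Anonymous pull token for a public GHCR repository (OCI distribution auth flow).
    token_url = f"https://ghcr.io/token?scope=repository:{name}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    # HEAD the manifest for the tag; 200 means the tag resolves, while a 404
    # mirrors the "fetch failed after status: 404 Not Found" entries above.
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{name}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists("flatcar/calico/whisker", "v3.30.4"))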
Jan 14 01:23:03.854000 audit[6348]: USER_START pid=6348 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:03.859000 audit[6352]: CRED_ACQ pid=6352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:03.864321 kernel: audit: type=1105 audit(1768353783.854:839): pid=6348 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:03.864367 kernel: audit: type=1103 audit(1768353783.859:840): pid=6352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.205510 sshd[6352]: Connection closed by 10.200.16.10 port 54468 Jan 14 01:23:04.206430 sshd-session[6348]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:04.206000 audit[6348]: USER_END pid=6348 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.212778 systemd[1]: sshd@14-10.200.4.7:22-10.200.16.10:54468.service: Deactivated successfully. Jan 14 01:23:04.217828 kernel: audit: type=1106 audit(1768353784.206:841): pid=6348 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.217894 kernel: audit: type=1104 audit(1768353784.206:842): pid=6348 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.206000 audit[6348]: CRED_DISP pid=6348 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.4.7:22-10.200.16.10:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:04.216817 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:23:04.218246 systemd-logind[2479]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:23:04.219421 systemd-logind[2479]: Removed session 18. 
Jan 14 01:23:04.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.7:22-10.200.16.10:54480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:04.317125 systemd[1]: Started sshd@15-10.200.4.7:22-10.200.16.10:54480.service - OpenSSH per-connection server daemon (10.200.16.10:54480). Jan 14 01:23:04.740988 containerd[2507]: time="2026-01-14T01:23:04.740947821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:23:04.856000 audit[6364]: USER_ACCT pid=6364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.858632 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 54480 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:04.860184 sshd-session[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:04.857000 audit[6364]: CRED_ACQ pid=6364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.857000 audit[6364]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6f91ed90 a2=3 a3=0 items=0 ppid=1 pid=6364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:04.857000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:04.867924 systemd-logind[2479]: New session 19 of user core. Jan 14 01:23:04.874162 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 01:23:04.876000 audit[6364]: USER_START pid=6364 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:04.879000 audit[6368]: CRED_ACQ pid=6368 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:05.004515 containerd[2507]: time="2026-01-14T01:23:05.004408766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:23:05.009017 containerd[2507]: time="2026-01-14T01:23:05.008975881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:23:05.009104 containerd[2507]: time="2026-01-14T01:23:05.009068081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:23:05.011649 kubelet[4019]: E0114 01:23:05.011616 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:23:05.011931 kubelet[4019]: E0114 01:23:05.011660 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:23:05.011931 kubelet[4019]: E0114 01:23:05.011730 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-dc47777bb-hmsh5_calico-apiserver(4c98e701-5dc1-42d2-b8b2-315dbbe213e6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:23:05.011931 kubelet[4019]: E0114 01:23:05.011762 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:23:05.278444 sshd[6368]: Connection closed by 10.200.16.10 port 54480 Jan 14 01:23:05.279639 sshd-session[6364]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:05.279000 audit[6364]: USER_END pid=6364 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:05.280000 audit[6364]: CRED_DISP pid=6364 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:05.283593 systemd-logind[2479]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:23:05.284460 systemd[1]: sshd@15-10.200.4.7:22-10.200.16.10:54480.service: Deactivated successfully. Jan 14 01:23:05.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.4.7:22-10.200.16.10:54480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:05.288222 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:23:05.291430 systemd-logind[2479]: Removed session 19. Jan 14 01:23:05.394664 systemd[1]: Started sshd@16-10.200.4.7:22-10.200.16.10:54484.service - OpenSSH per-connection server daemon (10.200.16.10:54484). Jan 14 01:23:05.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.7:22-10.200.16.10:54484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:05.937000 audit[6378]: USER_ACCT pid=6378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:05.937994 sshd[6378]: Accepted publickey for core from 10.200.16.10 port 54484 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:05.938000 audit[6378]: CRED_ACQ pid=6378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:05.938000 audit[6378]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2fe641a0 a2=3 a3=0 items=0 ppid=1 pid=6378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:05.938000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:05.941145 sshd-session[6378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:05.946049 systemd-logind[2479]: New session 20 of user core. Jan 14 01:23:05.949661 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:23:05.951000 audit[6378]: USER_START pid=6378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:05.952000 audit[6396]: CRED_ACQ pid=6396 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:06.743893 kubelet[4019]: E0114 01:23:06.743709 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:23:06.812000 audit[6406]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=6406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:06.812000 audit[6406]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdb8794e70 a2=0 a3=7ffdb8794e5c items=0 ppid=4124 pid=6406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:06.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:06.817000 audit[6406]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=6406 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:06.817000 audit[6406]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb8794e70 a2=0 a3=0 items=0 ppid=4124 pid=6406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:06.817000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:06.938579 sshd[6396]: Connection closed by 10.200.16.10 port 54484 Jan 14 01:23:06.940425 sshd-session[6378]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:06.941000 audit[6378]: USER_END pid=6378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:06.941000 audit[6378]: CRED_DISP pid=6378 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:06.944266 systemd-logind[2479]: Session 20 logged out. Waiting for processes to exit. 
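The audit records in this stream pack several encoded fields: PROCTITLE carries the process command line as a hex string (NUL-separated argv), SYSCALL reports the architecture and syscall number numerically, and auid=4294967295 / ses=4294967295 is simply the unset (-1) login UID or session. A small decoding sketch for values copied from the records above; the SYSCALL_NAMES map below covers only the two numbers seen in this log and is hand-written for illustration.

# Hex-encoded proctitle values copied from the PROCTITLE records above.
for hex_value in (
    "737368642D73657373696F6E3A20636F7265205B707269765D",
    "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273",
):
    # NUL bytes separate argv entries; join them with spaces for display.
    print(bytes.fromhex(hex_value).replace(b"\x00", b" ").decode())
    # -> "sshd-session: core [priv]" and "iptables-restore -w 5 --noflush --counters"

AUDIT_ARCH_X86_64 = 0xC000003E   # __AUDIT_ARCH_64BIT | __AUDIT_ARCH_LE | EM_X86_64
SYSCALL_NAMES = {1: "write", 46: "sendmsg"}  # x86_64 syscall numbers seen above
UNSET_ID = 0xFFFFFFFF            # auid/ses = 4294967295 means "not set"

record = {"arch": "c000003e", "syscall": 46, "auid": 4294967295}
assert int(record["arch"], 16) == AUDIT_ARCH_X86_64
print(SYSCALL_NAMES[record["syscall"]], record["auid"] == UNSET_ID)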
Jan 14 01:23:06.944940 systemd[1]: sshd@16-10.200.4.7:22-10.200.16.10:54484.service: Deactivated successfully. Jan 14 01:23:06.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.4.7:22-10.200.16.10:54484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:06.949090 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:23:06.953041 systemd-logind[2479]: Removed session 20. Jan 14 01:23:07.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.7:22-10.200.16.10:54494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:07.056768 systemd[1]: Started sshd@17-10.200.4.7:22-10.200.16.10:54494.service - OpenSSH per-connection server daemon (10.200.16.10:54494). Jan 14 01:23:07.596000 audit[6411]: USER_ACCT pid=6411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:07.597014 sshd[6411]: Accepted publickey for core from 10.200.16.10 port 54494 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:07.597000 audit[6411]: CRED_ACQ pid=6411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:07.597000 audit[6411]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc77f49b0 a2=3 a3=0 items=0 ppid=1 pid=6411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:07.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:07.598850 sshd-session[6411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:07.603942 systemd-logind[2479]: New session 21 of user core. Jan 14 01:23:07.610655 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 01:23:07.612000 audit[6411]: USER_START pid=6411 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:07.613000 audit[6415]: CRED_ACQ pid=6415 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:07.741864 containerd[2507]: time="2026-01-14T01:23:07.741833052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:23:07.870000 audit[6422]: NETFILTER_CFG table=filter:143 family=2 entries=38 op=nft_register_rule pid=6422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:07.870000 audit[6422]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff4ee83dc0 a2=0 a3=7fff4ee83dac items=0 ppid=4124 pid=6422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:07.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:07.878000 audit[6422]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=6422 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:07.878000 audit[6422]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff4ee83dc0 a2=0 a3=0 items=0 ppid=4124 pid=6422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:07.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:08.002530 containerd[2507]: time="2026-01-14T01:23:08.002475096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:23:08.013839 containerd[2507]: time="2026-01-14T01:23:08.013774915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:23:08.014010 containerd[2507]: time="2026-01-14T01:23:08.013826791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:23:08.014251 kubelet[4019]: E0114 01:23:08.014196 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:23:08.014251 kubelet[4019]: E0114 01:23:08.014236 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:23:08.014793 kubelet[4019]: E0114 01:23:08.014583 4019 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:23:08.016890 containerd[2507]: time="2026-01-14T01:23:08.016686316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:23:08.093502 sshd[6415]: Connection closed by 10.200.16.10 port 54494 Jan 14 01:23:08.094647 sshd-session[6411]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:08.095000 audit[6411]: USER_END pid=6411 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.096000 audit[6411]: CRED_DISP pid=6411 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.101035 systemd[1]: sshd@17-10.200.4.7:22-10.200.16.10:54494.service: Deactivated successfully. Jan 14 01:23:08.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.4.7:22-10.200.16.10:54494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:08.107042 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:23:08.108142 systemd-logind[2479]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:23:08.110780 systemd-logind[2479]: Removed session 21. Jan 14 01:23:08.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.7:22-10.200.16.10:54506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:08.228631 systemd[1]: Started sshd@18-10.200.4.7:22-10.200.16.10:54506.service - OpenSSH per-connection server daemon (10.200.16.10:54506). 
Jan 14 01:23:08.316951 containerd[2507]: time="2026-01-14T01:23:08.316913181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:23:08.320834 containerd[2507]: time="2026-01-14T01:23:08.320798454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:23:08.321002 containerd[2507]: time="2026-01-14T01:23:08.320848934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:23:08.321192 kubelet[4019]: E0114 01:23:08.321161 4019 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:23:08.321299 kubelet[4019]: E0114 01:23:08.321287 4019 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:23:08.321744 kubelet[4019]: E0114 01:23:08.321682 4019 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-c7dnf_calico-system(19451c9d-d740-439e-ba98-ce86a4dce532): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:23:08.321849 kubelet[4019]: E0114 01:23:08.321829 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:23:08.799135 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 14 01:23:08.799244 kernel: audit: type=1101 audit(1768353788.787:876): pid=6427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.787000 audit[6427]: USER_ACCT pid=6427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.793294 
sshd-session[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:08.799623 sshd[6427]: Accepted publickey for core from 10.200.16.10 port 54506 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:08.789000 audit[6427]: CRED_ACQ pid=6427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.808902 kernel: audit: type=1103 audit(1768353788.789:877): pid=6427 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.809026 kernel: audit: type=1006 audit(1768353788.789:878): pid=6427 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:23:08.789000 audit[6427]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe34b863c0 a2=3 a3=0 items=0 ppid=1 pid=6427 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:08.813559 systemd-logind[2479]: New session 22 of user core. Jan 14 01:23:08.815708 kernel: audit: type=1300 audit(1768353788.789:878): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe34b863c0 a2=3 a3=0 items=0 ppid=1 pid=6427 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:08.789000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:08.818319 kernel: audit: type=1327 audit(1768353788.789:878): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:08.820687 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:23:08.823000 audit[6427]: USER_START pid=6427 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.824000 audit[6431]: CRED_ACQ pid=6431 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.834429 kernel: audit: type=1105 audit(1768353788.823:879): pid=6427 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:08.834472 kernel: audit: type=1103 audit(1768353788.824:880): pid=6431 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:09.148625 sshd[6431]: Connection closed by 10.200.16.10 port 54506 Jan 14 01:23:09.150635 sshd-session[6427]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:09.154000 audit[6427]: USER_END pid=6427 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:09.157507 systemd[1]: sshd@18-10.200.4.7:22-10.200.16.10:54506.service: Deactivated successfully. Jan 14 01:23:09.161684 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:23:09.165580 kernel: audit: type=1106 audit(1768353789.154:881): pid=6427 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:09.166545 systemd-logind[2479]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:23:09.169321 systemd-logind[2479]: Removed session 22. Jan 14 01:23:09.154000 audit[6427]: CRED_DISP pid=6427 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:09.182636 kernel: audit: type=1104 audit(1768353789.154:882): pid=6427 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:09.182683 kernel: audit: type=1131 audit(1768353789.155:883): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.7:22-10.200.16.10:54506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:09.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.4.7:22-10.200.16.10:54506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:09.741848 kubelet[4019]: E0114 01:23:09.741789 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:23:10.742600 kubelet[4019]: E0114 01:23:10.742357 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:23:10.849000 audit[6443]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=6443 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:10.849000 audit[6443]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffee002850 a2=0 a3=7fffee00283c items=0 ppid=4124 pid=6443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:10.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:10.855000 audit[6443]: NETFILTER_CFG table=nat:146 family=2 entries=104 op=nft_register_chain pid=6443 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:10.855000 audit[6443]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffee002850 a2=0 a3=7fffee00283c items=0 ppid=4124 pid=6443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:10.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:14.271988 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:23:14.272099 kernel: audit: type=1130 audit(1768353794.262:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.7:22-10.200.16.10:35610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:14.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.7:22-10.200.16.10:35610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:14.262768 systemd[1]: Started sshd@19-10.200.4.7:22-10.200.16.10:35610.service - OpenSSH per-connection server daemon (10.200.16.10:35610). Jan 14 01:23:14.819000 audit[6445]: USER_ACCT pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:14.824465 sshd-session[6445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:14.827160 sshd[6445]: Accepted publickey for core from 10.200.16.10 port 35610 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:14.827734 kernel: audit: type=1101 audit(1768353794.819:887): pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:14.819000 audit[6445]: CRED_ACQ pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:14.835054 kernel: audit: type=1103 audit(1768353794.819:888): pid=6445 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:14.835308 systemd-logind[2479]: New session 23 of user core. Jan 14 01:23:14.837674 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 01:23:14.841517 kernel: audit: type=1006 audit(1768353794.819:889): pid=6445 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 01:23:14.819000 audit[6445]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed1f9b660 a2=3 a3=0 items=0 ppid=1 pid=6445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:14.849640 kernel: audit: type=1300 audit(1768353794.819:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed1f9b660 a2=3 a3=0 items=0 ppid=1 pid=6445 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:14.819000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:14.841000 audit[6445]: USER_START pid=6445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:14.858116 kernel: audit: type=1327 audit(1768353794.819:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:14.858276 kernel: audit: type=1105 audit(1768353794.841:890): pid=6445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:14.849000 audit[6449]: CRED_ACQ pid=6449 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:14.866505 kernel: audit: type=1103 audit(1768353794.849:891): pid=6449 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:15.169289 sshd[6449]: Connection closed by 10.200.16.10 port 35610 Jan 14 01:23:15.170195 sshd-session[6445]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:15.170000 audit[6445]: USER_END pid=6445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:15.175433 systemd[1]: sshd@19-10.200.4.7:22-10.200.16.10:35610.service: Deactivated successfully. Jan 14 01:23:15.177903 systemd[1]: session-23.scope: Deactivated successfully. 
Jan 14 01:23:15.179515 kernel: audit: type=1106 audit(1768353795.170:892): pid=6445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:15.172000 audit[6445]: CRED_DISP pid=6445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:15.181687 systemd-logind[2479]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:23:15.182539 systemd-logind[2479]: Removed session 23. Jan 14 01:23:15.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.4.7:22-10.200.16.10:35610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:15.185543 kernel: audit: type=1104 audit(1768353795.172:893): pid=6445 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:17.741342 kubelet[4019]: E0114 01:23:17.740874 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:23:17.741342 kubelet[4019]: E0114 01:23:17.741234 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:23:18.743510 kubelet[4019]: E0114 01:23:18.742690 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 
01:23:19.741519 kubelet[4019]: E0114 01:23:19.740983 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:23:20.287349 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:23:20.287422 kernel: audit: type=1130 audit(1768353800.280:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.7:22-10.200.16.10:57516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:20.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.7:22-10.200.16.10:57516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:20.281235 systemd[1]: Started sshd@20-10.200.4.7:22-10.200.16.10:57516.service - OpenSSH per-connection server daemon (10.200.16.10:57516). Jan 14 01:23:20.825000 audit[6463]: USER_ACCT pid=6463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:20.827680 sshd[6463]: Accepted publickey for core from 10.200.16.10 port 57516 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:20.829986 sshd-session[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:20.825000 audit[6463]: CRED_ACQ pid=6463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:20.837694 kernel: audit: type=1101 audit(1768353800.825:896): pid=6463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:20.837753 kernel: audit: type=1103 audit(1768353800.825:897): pid=6463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:20.838043 systemd-logind[2479]: New session 24 of user core. Jan 14 01:23:20.840870 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 01:23:20.825000 audit[6463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1dcea9b0 a2=3 a3=0 items=0 ppid=1 pid=6463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:20.848597 kernel: audit: type=1006 audit(1768353800.825:898): pid=6463 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 01:23:20.848782 kernel: audit: type=1300 audit(1768353800.825:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1dcea9b0 a2=3 a3=0 items=0 ppid=1 pid=6463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:20.825000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:20.843000 audit[6463]: USER_START pid=6463 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:20.861219 kernel: audit: type=1327 audit(1768353800.825:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:20.861314 kernel: audit: type=1105 audit(1768353800.843:899): pid=6463 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:20.854000 audit[6467]: CRED_ACQ pid=6467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:20.867852 kernel: audit: type=1103 audit(1768353800.854:900): pid=6467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:21.211518 sshd[6467]: Connection closed by 10.200.16.10 port 57516 Jan 14 01:23:21.212974 sshd-session[6463]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:21.215000 audit[6463]: USER_END pid=6463 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:21.219832 systemd[1]: sshd@20-10.200.4.7:22-10.200.16.10:57516.service: Deactivated successfully. Jan 14 01:23:21.222324 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:23:21.224381 systemd-logind[2479]: Session 24 logged out. Waiting for processes to exit. 
Jan 14 01:23:21.225950 kernel: audit: type=1106 audit(1768353801.215:901): pid=6463 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:21.215000 audit[6463]: CRED_DISP pid=6463 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:21.226693 systemd-logind[2479]: Removed session 24. Jan 14 01:23:21.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.4.7:22-10.200.16.10:57516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:21.232768 kernel: audit: type=1104 audit(1768353801.215:902): pid=6463 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:22.745702 kubelet[4019]: E0114 01:23:22.745561 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:23:25.740161 kubelet[4019]: E0114 01:23:25.740121 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff" Jan 14 01:23:26.320774 systemd[1]: Started sshd@21-10.200.4.7:22-10.200.16.10:57524.service - OpenSSH per-connection server daemon (10.200.16.10:57524). Jan 14 01:23:26.326165 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:23:26.326248 kernel: audit: type=1130 audit(1768353806.319:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.7:22-10.200.16.10:57524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:26.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.7:22-10.200.16.10:57524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:26.866000 audit[6501]: USER_ACCT pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:26.869978 sshd-session[6501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:26.871901 sshd[6501]: Accepted publickey for core from 10.200.16.10 port 57524 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:26.867000 audit[6501]: CRED_ACQ pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:26.876379 systemd-logind[2479]: New session 25 of user core. Jan 14 01:23:26.878878 kernel: audit: type=1101 audit(1768353806.866:905): pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:26.878937 kernel: audit: type=1103 audit(1768353806.867:906): pid=6501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:26.880676 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 01:23:26.883157 kernel: audit: type=1006 audit(1768353806.867:907): pid=6501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 01:23:26.886666 kernel: audit: type=1300 audit(1768353806.867:907): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeff960350 a2=3 a3=0 items=0 ppid=1 pid=6501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:26.867000 audit[6501]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeff960350 a2=3 a3=0 items=0 ppid=1 pid=6501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:26.890558 kernel: audit: type=1327 audit(1768353806.867:907): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:26.890803 kernel: audit: type=1105 audit(1768353806.887:908): pid=6501 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:26.867000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:26.887000 audit[6501]: USER_START pid=6501 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:26.894000 audit[6505]: CRED_ACQ pid=6505 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:26.901553 kernel: audit: type=1103 audit(1768353806.894:909): pid=6505 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:27.255472 sshd[6505]: Connection closed by 10.200.16.10 port 57524 Jan 14 01:23:27.256552 sshd-session[6501]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:27.256000 audit[6501]: USER_END pid=6501 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:27.267601 kernel: audit: type=1106 audit(1768353807.256:910): pid=6501 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:27.259975 systemd-logind[2479]: Session 25 logged out. Waiting for processes to exit. 
Jan 14 01:23:27.261865 systemd[1]: sshd@21-10.200.4.7:22-10.200.16.10:57524.service: Deactivated successfully. Jan 14 01:23:27.264394 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 01:23:27.266535 systemd-logind[2479]: Removed session 25. Jan 14 01:23:27.256000 audit[6501]: CRED_DISP pid=6501 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:27.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.4.7:22-10.200.16.10:57524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:27.274503 kernel: audit: type=1104 audit(1768353807.256:911): pid=6501 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:30.743305 kubelet[4019]: E0114 01:23:30.743250 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f76fbbdcb-fzd7d" podUID="ec043dd0-d5b9-4795-bda8-379dd9ed27d6" Jan 14 01:23:31.740705 kubelet[4019]: E0114 01:23:31.740649 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-c7dnf" podUID="19451c9d-d740-439e-ba98-ce86a4dce532" Jan 14 01:23:32.371610 systemd[1]: Started sshd@22-10.200.4.7:22-10.200.16.10:54474.service - OpenSSH per-connection server daemon (10.200.16.10:54474). Jan 14 01:23:32.376830 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:23:32.376918 kernel: audit: type=1130 audit(1768353812.371:913): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.7:22-10.200.16.10:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:32.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.7:22-10.200.16.10:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:32.744017 kubelet[4019]: E0114 01:23:32.743975 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hmsh5" podUID="4c98e701-5dc1-42d2-b8b2-315dbbe213e6" Jan 14 01:23:32.745722 kubelet[4019]: E0114 01:23:32.745683 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-dc47777bb-hzl5d" podUID="81ae1783-805d-45cb-a9d3-21a22f1883e1" Jan 14 01:23:32.924000 audit[6517]: USER_ACCT pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:32.925196 sshd[6517]: Accepted publickey for core from 10.200.16.10 port 54474 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:32.928797 sshd-session[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:32.929544 kernel: audit: type=1101 audit(1768353812.924:914): pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:32.929731 kernel: audit: type=1103 audit(1768353812.927:915): pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:32.927000 audit[6517]: CRED_ACQ pid=6517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:32.935384 kernel: audit: type=1006 audit(1768353812.927:916): pid=6517 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 01:23:32.927000 audit[6517]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9cd84f20 a2=3 a3=0 items=0 ppid=1 pid=6517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.939243 kernel: audit: type=1300 audit(1768353812.927:916): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9cd84f20 a2=3 a3=0 items=0 ppid=1 pid=6517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.940194 systemd-logind[2479]: New session 26 of user core. Jan 14 01:23:32.927000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:32.945508 kernel: audit: type=1327 audit(1768353812.927:916): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:32.946726 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 01:23:32.948000 audit[6517]: USER_START pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:32.948000 audit[6521]: CRED_ACQ pid=6521 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:32.958086 kernel: audit: type=1105 audit(1768353812.948:917): pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:32.958122 kernel: audit: type=1103 audit(1768353812.948:918): pid=6521 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:33.277885 sshd[6521]: Connection closed by 10.200.16.10 port 54474 Jan 14 01:23:33.278343 sshd-session[6517]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:33.279000 audit[6517]: USER_END pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:33.284009 systemd[1]: sshd@22-10.200.4.7:22-10.200.16.10:54474.service: Deactivated successfully. Jan 14 01:23:33.285609 kernel: audit: type=1106 audit(1768353813.279:919): pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:33.279000 audit[6517]: CRED_DISP pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:33.289545 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:23:33.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.4.7:22-10.200.16.10:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:33.290538 kernel: audit: type=1104 audit(1768353813.279:920): pid=6517 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:33.292636 systemd-logind[2479]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:23:33.294971 systemd-logind[2479]: Removed session 26. Jan 14 01:23:33.740878 kubelet[4019]: E0114 01:23:33.740832 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768b846847-hs7j8" podUID="ce22a880-122c-47be-98f6-68b9053dbdfd" Jan 14 01:23:38.398600 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:23:38.398991 kernel: audit: type=1130 audit(1768353818.394:922): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.7:22-10.200.16.10:54486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:38.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.7:22-10.200.16.10:54486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:38.395577 systemd[1]: Started sshd@23-10.200.4.7:22-10.200.16.10:54486.service - OpenSSH per-connection server daemon (10.200.16.10:54486). 
Jan 14 01:23:38.959000 audit[6533]: USER_ACCT pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:38.963537 sshd[6533]: Accepted publickey for core from 10.200.16.10 port 54486 ssh2: RSA SHA256:J8teR3AwO1E8/7X/9c+PYMIFVtj7X/YmG0aYnJ7jEeo Jan 14 01:23:38.965965 sshd-session[6533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:23:38.966548 kernel: audit: type=1101 audit(1768353818.959:923): pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:38.962000 audit[6533]: CRED_ACQ pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:38.972667 kernel: audit: type=1103 audit(1768353818.962:924): pid=6533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:38.972731 kernel: audit: type=1006 audit(1768353818.962:925): pid=6533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 01:23:38.962000 audit[6533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5b4d0d70 a2=3 a3=0 items=0 ppid=1 pid=6533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:38.977512 systemd-logind[2479]: New session 27 of user core. Jan 14 01:23:38.983607 kernel: audit: type=1300 audit(1768353818.962:925): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5b4d0d70 a2=3 a3=0 items=0 ppid=1 pid=6533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:38.962000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:38.987826 kernel: audit: type=1327 audit(1768353818.962:925): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:23:38.986936 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 01:23:38.988000 audit[6533]: USER_START pid=6533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:38.996519 kernel: audit: type=1105 audit(1768353818.988:926): pid=6533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:38.992000 audit[6537]: CRED_ACQ pid=6537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:39.002528 kernel: audit: type=1103 audit(1768353818.992:927): pid=6537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:39.319602 sshd[6537]: Connection closed by 10.200.16.10 port 54486 Jan 14 01:23:39.320317 sshd-session[6533]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:39.320000 audit[6533]: USER_END pid=6533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:39.327874 systemd[1]: sshd@23-10.200.4.7:22-10.200.16.10:54486.service: Deactivated successfully. Jan 14 01:23:39.332874 kernel: audit: type=1106 audit(1768353819.320:928): pid=6533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:39.332932 kernel: audit: type=1104 audit(1768353819.320:929): pid=6533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:39.320000 audit[6533]: CRED_DISP pid=6533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 14 01:23:39.331742 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 01:23:39.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.4.7:22-10.200.16.10:54486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:39.333479 systemd-logind[2479]: Session 27 logged out. Waiting for processes to exit. Jan 14 01:23:39.334653 systemd-logind[2479]: Removed session 27. 
Jan 14 01:23:40.743611 kubelet[4019]: E0114 01:23:40.743436 4019 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-q7p7k" podUID="2bd1a43b-e98f-4a1f-8c59-f0c6872188ff"