Dec 16 03:21:38.220236 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 03:21:38.220266 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:21:38.220279 kernel: BIOS-provided physical RAM map:
Dec 16 03:21:38.220287 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 16 03:21:38.220295 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 16 03:21:38.220303 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Dec 16 03:21:38.220312 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Dec 16 03:21:38.220319 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Dec 16 03:21:38.220326 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Dec 16 03:21:38.220335 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 16 03:21:38.220343 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 16 03:21:38.220350 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 16 03:21:38.220357 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 16 03:21:38.220365 kernel: printk: legacy bootconsole [earlyser0] enabled
Dec 16 03:21:38.220374 kernel: NX (Execute Disable) protection: active
Dec 16 03:21:38.220384 kernel: APIC: Static calls initialized
Dec 16 03:21:38.220392 kernel: efi: EFI v2.7 by Microsoft
Dec 16 03:21:38.220401 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa1018 RNG=0x3ffd2018
Dec 16 03:21:38.220410 kernel: random: crng init done
Dec 16 03:21:38.220418 kernel: secureboot: Secure boot disabled
Dec 16 03:21:38.220426 kernel: SMBIOS 3.1.0 present.
Dec 16 03:21:38.220434 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Dec 16 03:21:38.220442 kernel: DMI: Memory slots populated: 2/2
Dec 16 03:21:38.220449 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 16 03:21:38.220457 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Dec 16 03:21:38.220467 kernel: Hyper-V: Nested features: 0x3e0101
Dec 16 03:21:38.220475 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 16 03:21:38.220484 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 16 03:21:38.220493 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 03:21:38.220501 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 16 03:21:38.220508 kernel: tsc: Detected 2300.000 MHz processor
Dec 16 03:21:38.220516 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 03:21:38.220526 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 03:21:38.220534 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Dec 16 03:21:38.220546 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 03:21:38.220555 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 03:21:38.220564 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Dec 16 03:21:38.220573 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Dec 16 03:21:38.220582 kernel: Using GB pages for direct mapping
Dec 16 03:21:38.220591 kernel: ACPI: Early table checksum verification disabled
Dec 16 03:21:38.220604 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 16 03:21:38.220613 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 03:21:38.220621 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 03:21:38.220631 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 03:21:38.220640 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 16 03:21:38.220650 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 03:21:38.220661 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 03:21:38.220670 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 03:21:38.220679 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 03:21:38.220688 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Dec 16 03:21:38.220696 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 03:21:38.220705 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 16 03:21:38.220715 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Dec 16 03:21:38.220725 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 16 03:21:38.220734 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 16 03:21:38.220744 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 16 03:21:38.220753 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 16 03:21:38.220762 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 16 03:21:38.220771 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Dec 16 03:21:38.220781 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 16 03:21:38.220790 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Dec 16 03:21:38.220799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Dec 16 03:21:38.220809 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Dec 16 03:21:38.220818 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Dec 16 03:21:38.220828 kernel: Zone ranges:
Dec 16 03:21:38.220838 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 03:21:38.220847 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 16 03:21:38.220856 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 03:21:38.220865 kernel: Device empty
Dec 16 03:21:38.220873 kernel: Movable zone start for each node
Dec 16 03:21:38.220883 kernel: Early memory node ranges
Dec 16 03:21:38.220893 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 16 03:21:38.220902 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Dec 16 03:21:38.220913 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Dec 16 03:21:38.220922 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 16 03:21:38.220931 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 16 03:21:38.220939 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 16 03:21:38.220948 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 03:21:38.220957 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 16 03:21:38.220967 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 16 03:21:38.220978 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Dec 16 03:21:38.220987 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 16 03:21:38.220996 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 16 03:21:38.221005 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 03:21:38.221013 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 03:21:38.221022 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 03:21:38.221030 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 16 03:21:38.221039 kernel: TSC deadline timer available
Dec 16 03:21:38.221051 kernel: CPU topo: Max. logical packages: 1
Dec 16 03:21:38.221060 kernel: CPU topo: Max. logical dies: 1
Dec 16 03:21:38.221069 kernel: CPU topo: Max. dies per package: 1
Dec 16 03:21:38.221079 kernel: CPU topo: Max. threads per core: 2
Dec 16 03:21:38.221087 kernel: CPU topo: Num. cores per package: 1
Dec 16 03:21:38.221096 kernel: CPU topo: Num. threads per package: 2
Dec 16 03:21:38.221105 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Dec 16 03:21:38.221115 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 16 03:21:38.221125 kernel: Booting paravirtualized kernel on Hyper-V
Dec 16 03:21:38.221149 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 03:21:38.221160 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 16 03:21:38.221168 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Dec 16 03:21:38.221177 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Dec 16 03:21:38.221186 kernel: pcpu-alloc: [0] 0 1
Dec 16 03:21:38.221196 kernel: Hyper-V: PV spinlocks enabled
Dec 16 03:21:38.221206 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 03:21:38.221216 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:21:38.221226 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 16 03:21:38.221236 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 03:21:38.221245 kernel: Fallback order for Node 0: 0
Dec 16 03:21:38.221255 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Dec 16 03:21:38.221264 kernel: Policy zone: Normal
Dec 16 03:21:38.221272 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 03:21:38.221281 kernel: software IO TLB: area num 2.
Dec 16 03:21:38.221291 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 03:21:38.221300 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 03:21:38.221309 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 03:21:38.221319 kernel: Dynamic Preempt: voluntary
Dec 16 03:21:38.221329 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 03:21:38.221339 kernel: rcu: RCU event tracing is enabled.
Dec 16 03:21:38.221355 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 03:21:38.221366 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 03:21:38.221376 kernel: Rude variant of Tasks RCU enabled.
Dec 16 03:21:38.221387 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 03:21:38.221397 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 03:21:38.221406 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 03:21:38.221416 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:21:38.221427 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:21:38.221437 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 03:21:38.221446 kernel: Using NULL legacy PIC
Dec 16 03:21:38.221457 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 16 03:21:38.221469 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 03:21:38.221479 kernel: Console: colour dummy device 80x25
Dec 16 03:21:38.221489 kernel: printk: legacy console [tty1] enabled
Dec 16 03:21:38.221498 kernel: printk: legacy console [ttyS0] enabled
Dec 16 03:21:38.221507 kernel: printk: legacy bootconsole [earlyser0] disabled
Dec 16 03:21:38.221516 kernel: ACPI: Core revision 20240827
Dec 16 03:21:38.221526 kernel: Failed to register legacy timer interrupt
Dec 16 03:21:38.221537 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 03:21:38.221547 kernel: x2apic enabled
Dec 16 03:21:38.221557 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 03:21:38.221567 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Dec 16 03:21:38.221577 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 03:21:38.221586 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Dec 16 03:21:38.221596 kernel: Hyper-V: Using IPI hypercalls
Dec 16 03:21:38.221607 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 16 03:21:38.221616 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 16 03:21:38.221626 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 16 03:21:38.221636 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 16 03:21:38.221647 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 16 03:21:38.221657 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 16 03:21:38.221666 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Dec 16 03:21:38.221678 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Dec 16 03:21:38.221687 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 03:21:38.221696 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 16 03:21:38.221706 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 16 03:21:38.221716 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 03:21:38.221725 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 03:21:38.221735 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 03:21:38.221744 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 16 03:21:38.221754 kernel: RETBleed: Vulnerable
Dec 16 03:21:38.221763 kernel: Speculative Store Bypass: Vulnerable
Dec 16 03:21:38.221772 kernel: active return thunk: its_return_thunk
Dec 16 03:21:38.221780 kernel: ITS: Mitigation: Aligned branch/return thunks
Dec 16 03:21:38.221789 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 03:21:38.221799 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 03:21:38.221808 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 03:21:38.221818 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 16 03:21:38.221827 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 16 03:21:38.221836 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 16 03:21:38.221846 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Dec 16 03:21:38.221855 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Dec 16 03:21:38.221864 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Dec 16 03:21:38.221873 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 03:21:38.221883 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 16 03:21:38.221892 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 16 03:21:38.221902 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 16 03:21:38.221911 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Dec 16 03:21:38.221920 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Dec 16 03:21:38.221929 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Dec 16 03:21:38.221939 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Dec 16 03:21:38.221953 kernel: Freeing SMP alternatives memory: 32K
Dec 16 03:21:38.221963 kernel: pid_max: default: 32768 minimum: 301
Dec 16 03:21:38.221972 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 03:21:38.221981 kernel: landlock: Up and running.
Dec 16 03:21:38.221991 kernel: SELinux: Initializing.
Dec 16 03:21:38.222000 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 03:21:38.222009 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 16 03:21:38.222020 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Dec 16 03:21:38.222031 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Dec 16 03:21:38.222042 kernel: signal: max sigframe size: 11952
Dec 16 03:21:38.222061 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 03:21:38.222072 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 03:21:38.222083 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 03:21:38.222093 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 16 03:21:38.222104 kernel: smp: Bringing up secondary CPUs ...
Dec 16 03:21:38.222113 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 03:21:38.222124 kernel: .... node #0, CPUs: #1
Dec 16 03:21:38.222150 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 03:21:38.222160 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Dec 16 03:21:38.222171 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 283604K reserved, 0K cma-reserved)
Dec 16 03:21:38.222181 kernel: devtmpfs: initialized
Dec 16 03:21:38.222192 kernel: x86/mm: Memory block size: 128MB
Dec 16 03:21:38.222202 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 16 03:21:38.222214 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 03:21:38.222229 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 03:21:38.222239 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 03:21:38.222249 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 03:21:38.222259 kernel: audit: initializing netlink subsys (disabled)
Dec 16 03:21:38.222270 kernel: audit: type=2000 audit(1765855293.070:1): state=initialized audit_enabled=0 res=1
Dec 16 03:21:38.222279 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 03:21:38.222290 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 03:21:38.222300 kernel: cpuidle: using governor menu
Dec 16 03:21:38.222315 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 03:21:38.222324 kernel: dca service started, version 1.12.1
Dec 16 03:21:38.222334 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Dec 16 03:21:38.222345 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Dec 16 03:21:38.222354 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 03:21:38.222363 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 03:21:38.222379 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 03:21:38.222390 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 03:21:38.222401 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 03:21:38.222411 kernel: ACPI: Added _OSI(Module Device)
Dec 16 03:21:38.222420 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 03:21:38.222431 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 03:21:38.222440 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 03:21:38.222450 kernel: ACPI: Interpreter enabled
Dec 16 03:21:38.222464 kernel: ACPI: PM: (supports S0 S5)
Dec 16 03:21:38.222474 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 03:21:38.222484 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 03:21:38.222495 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 16 03:21:38.222504 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 16 03:21:38.222513 kernel: iommu: Default domain type: Translated
Dec 16 03:21:38.222528 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 03:21:38.222538 kernel: efivars: Registered efivars operations
Dec 16 03:21:38.222547 kernel: PCI: Using ACPI for IRQ routing
Dec 16 03:21:38.222556 kernel: PCI: System does not support PCI
Dec 16 03:21:38.222565 kernel: vgaarb: loaded
Dec 16 03:21:38.222575 kernel: clocksource: Switched to clocksource tsc-early
Dec 16 03:21:38.222584 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 03:21:38.222593 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 03:21:38.222604 kernel: pnp: PnP ACPI init
Dec 16 03:21:38.222613 kernel: pnp: PnP ACPI: found 3 devices
Dec 16 03:21:38.222623 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 03:21:38.222658 kernel: NET: Registered PF_INET protocol family
Dec 16 03:21:38.222669 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 16 03:21:38.222679 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 16 03:21:38.222690 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 03:21:38.222702 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 03:21:38.222713 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 16 03:21:38.222723 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 16 03:21:38.222733 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 03:21:38.222743 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 16 03:21:38.222751 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 03:21:38.222760 kernel: NET: Registered PF_XDP protocol family
Dec 16 03:21:38.222771 kernel: PCI: CLS 0 bytes, default 64
Dec 16 03:21:38.222781 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 16 03:21:38.222790 kernel: software IO TLB: mapped [mem 0x000000003a9ba000-0x000000003e9ba000] (64MB)
Dec 16 03:21:38.222800 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Dec 16 03:21:38.224211 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Dec 16 03:21:38.224223 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Dec 16 03:21:38.224232 kernel: clocksource: Switched to clocksource tsc
Dec 16 03:21:38.224245 kernel: Initialise system trusted keyrings
Dec 16 03:21:38.224254 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 16 03:21:38.224263 kernel: Key type asymmetric registered
Dec 16 03:21:38.224273 kernel: Asymmetric key parser 'x509' registered
Dec 16 03:21:38.224282 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 03:21:38.224292 kernel: io scheduler mq-deadline registered
Dec 16 03:21:38.224300 kernel: io scheduler kyber registered
Dec 16 03:21:38.224311 kernel: io scheduler bfq registered
Dec 16 03:21:38.224320 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 03:21:38.224330 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 03:21:38.224340 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 03:21:38.224349 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 16 03:21:38.224359 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 03:21:38.224370 kernel: i8042: PNP: No PS/2 controller found.
Dec 16 03:21:38.224550 kernel: rtc_cmos 00:02: registered as rtc0
Dec 16 03:21:38.224662 kernel: rtc_cmos 00:02: setting system clock to 2025-12-16T03:21:34 UTC (1765855294)
Dec 16 03:21:38.224766 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 16 03:21:38.224778 kernel: intel_pstate: Intel P-state driver initializing
Dec 16 03:21:38.224788 kernel: efifb: probing for efifb
Dec 16 03:21:38.224798 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 03:21:38.224811 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 03:21:38.224822 kernel: efifb: scrolling: redraw
Dec 16 03:21:38.224832 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 03:21:38.224841 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 03:21:38.224851 kernel: fb0: EFI VGA frame buffer device
Dec 16 03:21:38.224860 kernel: pstore: Using crash dump compression: deflate
Dec 16 03:21:38.224870 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 03:21:38.224882 kernel: NET: Registered PF_INET6 protocol family
Dec 16 03:21:38.224892 kernel: Segment Routing with IPv6
Dec 16 03:21:38.224903 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 03:21:38.224913 kernel: NET: Registered PF_PACKET protocol family
Dec 16 03:21:38.224923 kernel: Key type dns_resolver registered
Dec 16 03:21:38.224932 kernel: IPI shorthand broadcast: enabled
Dec 16 03:21:38.224942 kernel: sched_clock: Marking stable (2019216545, 107946779)->(2480556716, -353393392)
Dec 16 03:21:38.224978 kernel: registered taskstats version 1
Dec 16 03:21:38.224991 kernel: Loading compiled-in X.509 certificates
Dec 16 03:21:38.225002 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8'
Dec 16 03:21:38.225012 kernel: Demotion targets for Node 0: null
Dec 16 03:21:38.225021 kernel: Key type .fscrypt registered
Dec 16 03:21:38.225031 kernel: Key type fscrypt-provisioning registered
Dec 16 03:21:38.225040 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 03:21:38.225050 kernel: ima: Allocated hash algorithm: sha1
Dec 16 03:21:38.225062 kernel: ima: No architecture policies found
Dec 16 03:21:38.225072 kernel: clk: Disabling unused clocks
Dec 16 03:21:38.225083 kernel: Freeing unused kernel image (initmem) memory: 15556K
Dec 16 03:21:38.225093 kernel: Write protecting the kernel read-only data: 47104k
Dec 16 03:21:38.225103 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K
Dec 16 03:21:38.225112 kernel: Run /init as init process
Dec 16 03:21:38.225122 kernel: with arguments:
Dec 16 03:21:38.225132 kernel: /init
Dec 16 03:21:38.225160 kernel: with environment:
Dec 16 03:21:38.225170 kernel: HOME=/
Dec 16 03:21:38.225180 kernel: TERM=linux
Dec 16 03:21:38.225189 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 03:21:38.225199 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 03:21:38.225209 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 03:21:38.225218 kernel: PTP clock support registered
Dec 16 03:21:38.225231 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 03:21:38.225241 kernel: hv_vmbus: registering driver hv_utils
Dec 16 03:21:38.225252 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 03:21:38.225262 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 03:21:38.225271 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 03:21:38.225280 kernel: SCSI subsystem initialized
Dec 16 03:21:38.225290 kernel: hv_vmbus: registering driver hv_pci
Dec 16 03:21:38.225445 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Dec 16 03:21:38.225561 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Dec 16 03:21:38.225694 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Dec 16 03:21:38.225809 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 03:21:38.225956 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Dec 16 03:21:38.226087 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Dec 16 03:21:38.228583 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 03:21:38.228738 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Dec 16 03:21:38.228753 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 03:21:38.228917 kernel: scsi host0: storvsc_host_t
Dec 16 03:21:38.229086 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 03:21:38.229102 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 03:21:38.229114 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 03:21:38.229127 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 16 03:21:38.229291 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 03:21:38.229306 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 03:21:38.229319 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 16 03:21:38.229423 kernel: nvme nvme0: pci function c05b:00:00.0
Dec 16 03:21:38.229549 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Dec 16 03:21:38.229640 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 03:21:38.229653 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 03:21:38.229778 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 16 03:21:38.229791 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 16 03:21:38.229910 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 16 03:21:38.229921 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 03:21:38.229932 kernel: device-mapper: uevent: version 1.0.3
Dec 16 03:21:38.229942 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 03:21:38.229951 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 16 03:21:38.229974 kernel: raid6: avx512x4 gen() 41765 MB/s
Dec 16 03:21:38.229985 kernel: raid6: avx512x2 gen() 41024 MB/s
Dec 16 03:21:38.230003 kernel: raid6: avx512x1 gen() 25513 MB/s
Dec 16 03:21:38.230012 kernel: raid6: avx2x4 gen() 35455 MB/s
Dec 16 03:21:38.230022 kernel: raid6: avx2x2 gen() 37784 MB/s
Dec 16 03:21:38.230032 kernel: raid6: avx2x1 gen() 30603 MB/s
Dec 16 03:21:38.230042 kernel: raid6: using algorithm avx512x4 gen() 41765 MB/s
Dec 16 03:21:38.230055 kernel: raid6: .... xor() 7354 MB/s, rmw enabled
Dec 16 03:21:38.230065 kernel: raid6: using avx512x2 recovery algorithm
Dec 16 03:21:38.230075 kernel: xor: automatically using best checksumming function avx
Dec 16 03:21:38.230084 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 03:21:38.230095 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (913)
Dec 16 03:21:38.230106 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7
Dec 16 03:21:38.230116 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:21:38.230128 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 03:21:38.232012 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 03:21:38.232029 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 03:21:38.232040 kernel: loop: module loaded
Dec 16 03:21:38.232051 kernel: loop0: detected capacity change from 0 to 100528
Dec 16 03:21:38.232062 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 03:21:38.232074 systemd[1]: Successfully made /usr/ read-only.
Dec 16 03:21:38.232095 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 03:21:38.232107 systemd[1]: Detected virtualization microsoft.
Dec 16 03:21:38.232119 systemd[1]: Detected architecture x86-64.
Dec 16 03:21:38.232130 systemd[1]: Running in initrd.
Dec 16 03:21:38.232184 systemd[1]: No hostname configured, using default hostname.
Dec 16 03:21:38.232198 systemd[1]: Hostname set to .
Dec 16 03:21:38.232214 systemd[1]: Initializing machine ID from random generator.
Dec 16 03:21:38.232225 systemd[1]: Queued start job for default target initrd.target.
Dec 16 03:21:38.232236 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 03:21:38.232248 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 03:21:38.232259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 03:21:38.232272 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 03:21:38.232287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 03:21:38.232298 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 03:21:38.232311 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 03:21:38.232323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 03:21:38.232338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 03:21:38.232349 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 03:21:38.232362 systemd[1]: Reached target paths.target - Path Units.
Dec 16 03:21:38.232378 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 03:21:38.232389 systemd[1]: Reached target swap.target - Swaps.
Dec 16 03:21:38.232400 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 03:21:38.232414 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 03:21:38.232426 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 03:21:38.232438 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 03:21:38.232449 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 03:21:38.232460 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 03:21:38.232471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 03:21:38.232482 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 03:21:38.232496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 03:21:38.232508 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 03:21:38.232521 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 03:21:38.232533 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 03:21:38.232545 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 03:21:38.232557 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 03:21:38.232569 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 03:21:38.232584 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 03:21:38.232596 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 03:21:38.232607 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 03:21:38.232619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 03:21:38.232659 systemd-journald[1049]: Collecting audit messages is enabled.
Dec 16 03:21:38.232684 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 03:21:38.232694 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 03:21:38.232709 kernel: audit: type=1130 audit(1765855298.225:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.232724 systemd-journald[1049]: Journal started
Dec 16 03:21:38.232749 systemd-journald[1049]: Runtime Journal (/run/log/journal/3360ea03e90247f19e842d711a06562f) is 8M, max 158.5M, 150.5M free.
Dec 16 03:21:38.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.239152 kernel: audit: type=1130 audit(1765855298.234:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.239184 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 03:21:38.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.243564 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 03:21:38.257245 kernel: audit: type=1130 audit(1765855298.242:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.257272 kernel: audit: type=1130 audit(1765855298.246:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.250265 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 03:21:38.261154 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 03:21:38.456679 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 03:21:38.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.458623 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 03:21:38.469413 kernel: audit: type=1130 audit(1765855298.458:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.462624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 03:21:38.487204 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 03:21:38.507515 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 03:21:38.511629 kernel: Bridge firewalling registered
Dec 16 03:21:38.511082 systemd-modules-load[1053]: Inserted module 'br_netfilter'
Dec 16 03:21:38.517494 kernel: audit: type=1130 audit(1765855298.512:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.514111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 03:21:38.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.523960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:21:38.527227 kernel: audit: type=1130 audit(1765855298.517:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.528250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 03:21:38.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.536161 kernel: audit: type=1130 audit(1765855298.524:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.538894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 03:21:38.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.551255 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 03:21:38.555232 kernel: audit: type=1130 audit(1765855298.544:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.551832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 03:21:38.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.557000 audit: BPF prog-id=6 op=LOAD
Dec 16 03:21:38.567262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 03:21:38.582717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 03:21:38.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.588871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook.
Dec 16 03:21:38.625056 dracut-cmdline[1089]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:21:38.661499 systemd-resolved[1077]: Positive Trust Anchors:
Dec 16 03:21:38.661512 systemd-resolved[1077]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 03:21:38.661516 systemd-resolved[1077]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Dec 16 03:21:38.661556 systemd-resolved[1077]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 03:21:38.708305 systemd-resolved[1077]: Defaulting to hostname 'linux'.
Dec 16 03:21:38.710692 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 03:21:38.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.717418 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:21:38.726243 kernel: kauditd_printk_skb: 3 callbacks suppressed
Dec 16 03:21:38.726269 kernel: audit: type=1130 audit(1765855298.716:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.795166 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 03:21:38.853158 kernel: iscsi: registered transport (tcp)
Dec 16 03:21:38.919485 kernel: iscsi: registered transport (qla4xxx)
Dec 16 03:21:38.919536 kernel: QLogic iSCSI HBA Driver
Dec 16 03:21:38.968412 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 03:21:38.988923 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 03:21:38.998302 kernel: audit: type=1130 audit(1765855298.988:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:38.989702 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 03:21:39.032657 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 03:21:39.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.040184 kernel: audit: type=1130 audit(1765855299.034:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.040298 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 03:21:39.044470 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 03:21:39.073405 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 03:21:39.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.083186 kernel: audit: type=1130 audit(1765855299.077:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.083340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 03:21:39.077000 audit: BPF prog-id=7 op=LOAD
Dec 16 03:21:39.090698 kernel: audit: type=1334 audit(1765855299.077:18): prog-id=7 op=LOAD
Dec 16 03:21:39.090795 kernel: audit: type=1334 audit(1765855299.077:19): prog-id=8 op=LOAD
Dec 16 03:21:39.077000 audit: BPF prog-id=8 op=LOAD
Dec 16 03:21:39.120447 systemd-udevd[1330]: Using default interface naming scheme 'v257'.
Dec 16 03:21:39.133317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 03:21:39.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.142392 kernel: audit: type=1130 audit(1765855299.135:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.143956 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 03:21:39.159080 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 03:21:39.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.169171 kernel: audit: type=1130 audit(1765855299.164:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.170282 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 03:21:39.176326 kernel: audit: type=1334 audit(1765855299.168:22): prog-id=9 op=LOAD
Dec 16 03:21:39.168000 audit: BPF prog-id=9 op=LOAD
Dec 16 03:21:39.179182 dracut-pre-trigger[1417]: rd.md=0: removing MD RAID activation
Dec 16 03:21:39.205949 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 03:21:39.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.217157 kernel: audit: type=1130 audit(1765855299.210:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.216298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:21:39.229657 systemd-networkd[1431]: lo: Link UP
Dec 16 03:21:39.229880 systemd-networkd[1431]: lo: Gained carrier
Dec 16 03:21:39.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.230370 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 03:21:39.236290 systemd[1]: Reached target network.target - Network.
Dec 16 03:21:39.268496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:21:39.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.273884 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 03:21:39.342440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 03:21:39.343316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 03:21:39.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.348270 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 03:21:39.353618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 03:21:39.388670 kernel: cryptd: max_cpu_qlen set to 1000
Dec 16 03:21:39.388712 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#217 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 03:21:39.404186 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 03:21:39.413458 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d741e2e (unnamed net_device) (uninitialized): VF slot 1 added
Dec 16 03:21:39.415080 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 03:21:39.415822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 03:21:39.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.428583 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 03:21:39.439801 systemd-networkd[1431]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 03:21:39.439808 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 03:21:39.443024 systemd-networkd[1431]: eth0: Link UP
Dec 16 03:21:39.443484 systemd-networkd[1431]: eth0: Gained carrier
Dec 16 03:21:39.455824 kernel: AES CTR mode by8 optimization enabled
Dec 16 03:21:39.443497 systemd-networkd[1431]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Dec 16 03:21:39.465217 systemd-networkd[1431]: eth0: DHCPv4 address 10.200.8.23/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 16 03:21:39.504673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 03:21:39.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.591163 kernel: nvme nvme0: using unchecked data buffer
Dec 16 03:21:39.685060 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Dec 16 03:21:39.689469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 03:21:39.800956 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Dec 16 03:21:39.813175 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Dec 16 03:21:39.853676 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Dec 16 03:21:39.976739 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 03:21:39.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:39.981680 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 03:21:39.986775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 03:21:39.992464 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 03:21:39.997126 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 03:21:40.029272 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 03:21:40.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:40.438990 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Dec 16 03:21:40.439268 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Dec 16 03:21:40.441846 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Dec 16 03:21:40.443592 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 16 03:21:40.448264 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Dec 16 03:21:40.452188 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Dec 16 03:21:40.457315 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Dec 16 03:21:40.457398 kernel: pci 7870:00:00.0: enabling Extended Tags
Dec 16 03:21:40.473747 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Dec 16 03:21:40.473947 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Dec 16 03:21:40.478179 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Dec 16 03:21:40.495317 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Dec 16 03:21:40.505152 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Dec 16 03:21:40.508741 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d741e2e eth0: VF registering: eth1
Dec 16 03:21:40.508899 kernel: mana 7870:00:00.0 eth1: joined to eth0
Dec 16 03:21:40.513157 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Dec 16 03:21:40.513255 systemd-networkd[1431]: eth1: Interface name change detected, renamed to enP30832s1.
Dec 16 03:21:40.612181 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Dec 16 03:21:40.615173 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Dec 16 03:21:40.615857 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d741e2e eth0: Data path switched to VF: enP30832s1
Dec 16 03:21:40.616764 systemd-networkd[1431]: enP30832s1: Link UP
Dec 16 03:21:40.618048 systemd-networkd[1431]: enP30832s1: Gained carrier
Dec 16 03:21:40.992789 disk-uuid[1619]: Warning: The kernel is still using the old partition table.
Dec 16 03:21:40.992789 disk-uuid[1619]: The new table will be used at the next reboot or after you
Dec 16 03:21:40.992789 disk-uuid[1619]: run partprobe(8) or kpartx(8)
Dec 16 03:21:40.992789 disk-uuid[1619]: The operation has completed successfully.
Dec 16 03:21:41.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:41.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:21:40.999634 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 03:21:40.999747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 03:21:41.004459 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 03:21:41.052156 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1665) Dec 16 03:21:41.052206 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:21:41.054897 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:21:41.075399 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 03:21:41.075447 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 03:21:41.076439 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 03:21:41.083161 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:21:41.083882 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:21:41.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:41.087964 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 03:21:41.177264 systemd-networkd[1431]: eth0: Gained IPv6LL Dec 16 03:21:42.072262 ignition[1684]: Ignition 2.24.0 Dec 16 03:21:42.072274 ignition[1684]: Stage: fetch-offline Dec 16 03:21:42.072411 ignition[1684]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:21:42.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:42.075978 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:21:42.072419 ignition[1684]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 03:21:42.080893 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 03:21:42.072508 ignition[1684]: parsed url from cmdline: "" Dec 16 03:21:42.072511 ignition[1684]: no config URL provided Dec 16 03:21:42.072516 ignition[1684]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:21:42.072523 ignition[1684]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:21:42.072528 ignition[1684]: failed to fetch config: resource requires networking Dec 16 03:21:42.074657 ignition[1684]: Ignition finished successfully Dec 16 03:21:42.106880 ignition[1690]: Ignition 2.24.0 Dec 16 03:21:42.106892 ignition[1690]: Stage: fetch Dec 16 03:21:42.107117 ignition[1690]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:21:42.107841 ignition[1690]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 03:21:42.108676 ignition[1690]: parsed url from cmdline: "" Dec 16 03:21:42.108681 ignition[1690]: no config URL provided Dec 16 03:21:42.108694 ignition[1690]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 03:21:42.108700 ignition[1690]: no config at "/usr/lib/ignition/user.ign" Dec 16 03:21:42.108721 ignition[1690]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 03:21:42.187862 ignition[1690]: GET result: OK Dec 16 03:21:42.187947 ignition[1690]: config has been read from IMDS userdata Dec 16 03:21:42.187976 ignition[1690]: parsing config with SHA512: 4334d99e01d5dfd26c2b749f5f159c876f03d4479dee0517f9e72116b3cd49b85a774d10c4aee8031a264b496a59060eb096de0f8c4d39e44ee128dcbff85b0b Dec 16 03:21:42.193644 unknown[1690]: fetched base config from "system" Dec 16 03:21:42.193654 unknown[1690]: fetched base config from "system" Dec 16 03:21:42.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:42.193980 ignition[1690]: fetch: fetch complete Dec 16 03:21:42.193659 unknown[1690]: fetched user config from "azure" Dec 16 03:21:42.193984 ignition[1690]: fetch: fetch passed Dec 16 03:21:42.196674 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 03:21:42.194020 ignition[1690]: Ignition finished successfully Dec 16 03:21:42.199350 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 03:21:42.222914 ignition[1696]: Ignition 2.24.0 Dec 16 03:21:42.222924 ignition[1696]: Stage: kargs Dec 16 03:21:42.223923 ignition[1696]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:21:42.223934 ignition[1696]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 03:21:42.229171 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 03:21:42.226418 ignition[1696]: kargs: kargs passed Dec 16 03:21:42.226447 ignition[1696]: Ignition finished successfully Dec 16 03:21:42.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:42.236937 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 03:21:42.260829 ignition[1702]: Ignition 2.24.0 Dec 16 03:21:42.260841 ignition[1702]: Stage: disks Dec 16 03:21:42.261068 ignition[1702]: no configs at "/usr/lib/ignition/base.d" Dec 16 03:21:42.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:21:42.262824 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 03:21:42.261076 ignition[1702]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 03:21:42.266603 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 03:21:42.261853 ignition[1702]: disks: disks passed Dec 16 03:21:42.269266 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 03:21:42.261887 ignition[1702]: Ignition finished successfully Dec 16 03:21:42.272191 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:21:42.273647 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:21:42.273669 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:21:42.275272 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 03:21:42.357302 systemd-fsck[1710]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Dec 16 03:21:42.361109 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 03:21:42.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:42.365699 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 03:21:42.664158 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none. Dec 16 03:21:42.665299 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 03:21:42.667117 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 03:21:42.699420 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:21:42.705240 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 03:21:42.713527 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 03:21:42.718288 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 03:21:42.733252 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1719) Dec 16 03:21:42.733288 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:21:42.733299 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:21:42.718344 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:21:42.737644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 03:21:42.745888 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 03:21:42.745915 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 03:21:42.745933 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 03:21:42.747968 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:21:42.751566 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 03:21:43.241249 coreos-metadata[1721]: Dec 16 03:21:43.240 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 03:21:43.246296 coreos-metadata[1721]: Dec 16 03:21:43.245 INFO Fetch successful Dec 16 03:21:43.246296 coreos-metadata[1721]: Dec 16 03:21:43.246 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 03:21:43.262559 coreos-metadata[1721]: Dec 16 03:21:43.262 INFO Fetch successful Dec 16 03:21:43.277128 coreos-metadata[1721]: Dec 16 03:21:43.277 INFO wrote hostname ci-4547.0.0-a-dc3ed46bb5 to /sysroot/etc/hostname Dec 16 03:21:43.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:43.279169 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 03:21:44.843262 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 03:21:44.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:44.848803 kernel: kauditd_printk_skb: 17 callbacks suppressed Dec 16 03:21:44.848828 kernel: audit: type=1130 audit(1765855304.846:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:44.852814 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 03:21:44.858050 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 03:21:44.886788 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 03:21:44.890280 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:21:44.904039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 03:21:44.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:44.914183 kernel: audit: type=1130 audit(1765855304.907:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:44.916999 ignition[1826]: INFO : Ignition 2.24.0 Dec 16 03:21:44.916999 ignition[1826]: INFO : Stage: mount Dec 16 03:21:44.922710 ignition[1826]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:21:44.922710 ignition[1826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 03:21:44.922710 ignition[1826]: INFO : mount: mount passed Dec 16 03:21:44.922710 ignition[1826]: INFO : Ignition finished successfully Dec 16 03:21:44.927232 kernel: audit: type=1130 audit(1765855304.922:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:44.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:44.922115 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Dec 16 03:21:44.926529 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 03:21:44.966566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 03:21:44.987156 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1836) Dec 16 03:21:44.987192 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:21:44.989196 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:21:44.994559 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 03:21:44.994603 kernel: BTRFS info (device nvme0n1p6): turning on async discard Dec 16 03:21:44.995878 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 03:21:44.997763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 03:21:45.023238 ignition[1853]: INFO : Ignition 2.24.0 Dec 16 03:21:45.023238 ignition[1853]: INFO : Stage: files Dec 16 03:21:45.027219 ignition[1853]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:21:45.027219 ignition[1853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 03:21:45.027219 ignition[1853]: DEBUG : files: compiled without relabeling support, skipping Dec 16 03:21:45.053810 ignition[1853]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 03:21:45.053810 ignition[1853]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 03:21:45.136112 ignition[1853]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 03:21:45.138374 ignition[1853]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 03:21:45.140459 unknown[1853]: wrote ssh authorized keys file for user: core Dec 16 03:21:45.143202 ignition[1853]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 03:21:45.159640 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:21:45.168233 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Dec 16 03:21:45.367223 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 03:21:45.477888 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Dec 16 03:21:45.477888 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 03:21:45.488256 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:21:45.518175 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:21:45.518175 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:21:45.518175 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Dec 16 03:21:46.020696 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 03:21:46.660727 ignition[1853]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Dec 16 03:21:46.660727 ignition[1853]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 03:21:46.713091 ignition[1853]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:21:46.718078 ignition[1853]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 03:21:46.718078 ignition[1853]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 03:21:46.725247 ignition[1853]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 03:21:46.725247 ignition[1853]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 03:21:46.725247 ignition[1853]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:21:46.725247 ignition[1853]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 03:21:46.725247 ignition[1853]: INFO : files: files passed Dec 16 03:21:46.725247 ignition[1853]: INFO : Ignition finished successfully Dec 16 03:21:46.749232 kernel: audit: type=1130 audit(1765855306.728:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.724232 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 03:21:46.738575 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Dec 16 03:21:46.748222 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 03:21:46.754478 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 03:21:46.766219 kernel: audit: type=1130 audit(1765855306.759:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.766249 kernel: audit: type=1131 audit(1765855306.759:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.756238 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 03:21:46.773653 initrd-setup-root-after-ignition[1884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:21:46.773653 initrd-setup-root-after-ignition[1884]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:21:46.779443 initrd-setup-root-after-ignition[1888]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 03:21:46.788291 kernel: audit: type=1130 audit(1765855306.779:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.778125 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:21:46.780901 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 03:21:46.790780 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 03:21:46.836937 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 03:21:46.837034 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 03:21:46.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.843328 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 03:21:46.854922 kernel: audit: type=1130 audit(1765855306.841:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:21:46.854974 kernel: audit: type=1131 audit(1765855306.841:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.850266 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 03:21:46.852306 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 03:21:46.853428 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 03:21:46.876785 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:21:46.884243 kernel: audit: type=1130 audit(1765855306.876:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.883492 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 03:21:46.904954 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:21:46.905233 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:21:46.909192 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:21:46.915321 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 03:21:46.918509 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 03:21:46.919817 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 03:21:46.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.926330 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 03:21:46.928961 systemd[1]: Stopped target basic.target - Basic System. Dec 16 03:21:46.932309 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 03:21:46.934059 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 03:21:46.937106 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 03:21:46.940112 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:21:46.942940 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 03:21:46.947370 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:21:46.951318 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 03:21:46.954177 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 03:21:46.956895 systemd[1]: Stopped target swap.target - Swaps. Dec 16 03:21:46.959000 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 03:21:46.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.959162 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Dec 16 03:21:46.965245 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:21:46.969311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:21:46.970223 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 03:21:46.970965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:21:46.978271 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 03:21:46.979513 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 03:21:46.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.986330 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 03:21:46.987815 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 03:21:46.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.992783 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 03:21:46.992889 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 03:21:46.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:46.997346 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 03:21:46.997476 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 03:21:46.998649 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 03:21:47.006955 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 03:21:47.021439 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 03:21:47.021832 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:21:47.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.027985 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 03:21:47.028112 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:21:47.034675 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 03:21:47.035234 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:21:47.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:21:47.048817 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 03:21:47.050737 ignition[1908]: INFO : Ignition 2.24.0 Dec 16 03:21:47.050737 ignition[1908]: INFO : Stage: umount Dec 16 03:21:47.053645 ignition[1908]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 03:21:47.053645 ignition[1908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 03:21:47.053645 ignition[1908]: INFO : umount: umount passed Dec 16 03:21:47.053645 ignition[1908]: INFO : Ignition finished successfully Dec 16 03:21:47.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.053292 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 03:21:47.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.057506 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 03:21:47.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.057593 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 03:21:47.062266 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 03:21:47.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.062356 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 03:21:47.066803 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 03:21:47.067601 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 03:21:47.071922 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 03:21:47.071998 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 03:21:47.075844 systemd[1]: Stopped target network.target - Network. Dec 16 03:21:47.078610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 03:21:47.079170 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 03:21:47.082096 systemd[1]: Stopped target paths.target - Path Units. Dec 16 03:21:47.087248 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 03:21:47.087343 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:21:47.090447 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 16 03:21:47.099354 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 03:21:47.108017 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 03:21:47.108060 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:21:47.110740 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 03:21:47.111790 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:21:47.117989 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 16 03:21:47.118251 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:21:47.121542 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 03:21:47.121587 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 03:21:47.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.132260 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 03:21:47.132311 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 03:21:47.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.136487 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 03:21:47.139056 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 03:21:47.141529 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 03:21:47.146231 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 03:21:47.146331 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 03:21:47.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.151557 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 03:21:47.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.151651 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 03:21:47.158784 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 03:21:47.157000 audit: BPF prog-id=6 op=UNLOAD Dec 16 03:21:47.159000 audit: BPF prog-id=9 op=UNLOAD Dec 16 03:21:47.162232 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 03:21:47.162271 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:21:47.166388 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 03:21:47.171802 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 03:21:47.171864 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:21:47.179409 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 03:21:47.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:21:47.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.179464 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:21:47.182600 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 03:21:47.182652 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 03:21:47.189272 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:21:47.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.208071 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 03:21:47.209553 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:21:47.214487 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d741e2e eth0: Data path switched from VF: enP30832s1 Dec 16 03:21:47.217159 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 03:21:47.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.218468 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 03:21:47.218508 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 03:21:47.222743 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 03:21:47.222778 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:21:47.225397 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 03:21:47.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.225532 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:21:47.233466 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 03:21:47.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.233552 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 03:21:47.239253 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 03:21:47.241804 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:21:47.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.245906 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 03:21:47.247789 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 03:21:47.247846 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 16 03:21:47.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.256230 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:21:47.256299 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:21:47.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.261224 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 03:21:47.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.261268 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:21:47.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.266233 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:21:47.266280 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:21:47.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.269450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:21:47.269495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:21:47.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.279268 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:21:47.279361 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:21:47.281729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:21:47.281806 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:21:47.549333 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:21:47.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.549447 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 16 03:21:47.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:47.552620 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:21:47.552960 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:21:47.553036 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:21:47.556295 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:21:47.570153 systemd[1]: Switching root. Dec 16 03:21:47.637862 systemd-journald[1049]: Journal stopped Dec 16 03:21:51.537258 systemd-journald[1049]: Received SIGTERM from PID 1 (systemd). Dec 16 03:21:51.537296 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:21:51.537314 kernel: SELinux: policy capability open_perms=1 Dec 16 03:21:51.537325 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:21:51.537334 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:21:51.537344 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:21:51.537355 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:21:51.537366 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:21:51.537378 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:21:51.537389 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:21:51.537399 systemd[1]: Successfully loaded SELinux policy in 139.692ms. Dec 16 03:21:51.537412 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.397ms. Dec 16 03:21:51.537424 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:21:51.537437 systemd[1]: Detected virtualization microsoft. Dec 16 03:21:51.537449 systemd[1]: Detected architecture x86-64. Dec 16 03:21:51.537461 systemd[1]: Detected first boot. Dec 16 03:21:51.537473 systemd[1]: Hostname set to . Dec 16 03:21:51.537486 systemd[1]: Initializing machine ID from random generator. Dec 16 03:21:51.537497 zram_generator::config[1951]: No configuration found. Dec 16 03:21:51.537510 kernel: Guest personality initialized and is inactive Dec 16 03:21:51.537521 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Dec 16 03:21:51.537532 kernel: Initialized host personality Dec 16 03:21:51.537543 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:21:51.537554 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:21:51.537567 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 16 03:21:51.537578 kernel: audit: type=1334 audit(1765855311.092:96): prog-id=12 op=LOAD Dec 16 03:21:51.537589 kernel: audit: type=1334 audit(1765855311.092:97): prog-id=3 op=UNLOAD Dec 16 03:21:51.537600 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:21:51.537611 kernel: audit: type=1334 audit(1765855311.092:98): prog-id=13 op=LOAD Dec 16 03:21:51.537622 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Dec 16 03:21:51.537640 kernel: audit: type=1334 audit(1765855311.092:99): prog-id=14 op=LOAD Dec 16 03:21:51.537651 kernel: audit: type=1334 audit(1765855311.092:100): prog-id=4 op=UNLOAD Dec 16 03:21:51.537661 kernel: audit: type=1334 audit(1765855311.092:101): prog-id=5 op=UNLOAD Dec 16 03:21:51.537673 kernel: audit: type=1131 audit(1765855311.093:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.537683 kernel: audit: type=1334 audit(1765855311.109:103): prog-id=12 op=UNLOAD Dec 16 03:21:51.537694 kernel: audit: type=1130 audit(1765855311.110:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.537707 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:21:51.537719 kernel: audit: type=1131 audit(1765855311.110:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.537733 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:21:51.537745 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 03:21:51.537760 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:21:51.537771 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:21:51.537785 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 03:21:51.537797 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:21:51.537809 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:21:51.537820 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:21:51.537832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:21:51.537843 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:21:51.537856 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:21:51.537868 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:21:51.537880 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:21:51.537892 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:21:51.537904 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:21:51.537916 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:21:51.537930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:21:51.537942 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:21:51.537953 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:21:51.537964 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:21:51.537975 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Dec 16 03:21:51.537986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:21:51.537997 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:21:51.538010 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:21:51.538022 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:21:51.538098 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:21:51.538110 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:21:51.538122 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:21:51.538163 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:21:51.538175 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:21:51.538187 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:21:51.538198 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:21:51.538211 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:21:51.538224 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:21:51.538236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:21:51.538248 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:21:51.538259 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:21:51.538271 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:21:51.538283 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:21:51.538295 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:21:51.538308 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:21:51.538319 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 03:21:51.538331 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:21:51.538342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:21:51.538355 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:21:51.538367 systemd[1]: Reached target machines.target - Containers. Dec 16 03:21:51.538380 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:21:51.538394 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:21:51.538406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:21:51.538419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:21:51.538430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:21:51.538442 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:21:51.538453 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:21:51.538466 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:21:51.538477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 03:21:51.538489 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:21:51.538501 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:21:51.538512 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:21:51.538524 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:21:51.538535 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 03:21:51.538550 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:21:51.538562 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:21:51.538573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:21:51.538585 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:21:51.538618 systemd-journald[2037]: Collecting audit messages is enabled. Dec 16 03:21:51.538648 systemd-journald[2037]: Journal started Dec 16 03:21:51.538669 systemd-journald[2037]: Runtime Journal (/run/log/journal/52959bf305c7479fac492c1b5a57f373) is 8M, max 158.5M, 150.5M free. Dec 16 03:21:51.260000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 03:21:51.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.470000 audit: BPF prog-id=14 op=UNLOAD Dec 16 03:21:51.470000 audit: BPF prog-id=13 op=UNLOAD Dec 16 03:21:51.470000 audit: BPF prog-id=15 op=LOAD Dec 16 03:21:51.470000 audit: BPF prog-id=16 op=LOAD Dec 16 03:21:51.470000 audit: BPF prog-id=17 op=LOAD Dec 16 03:21:51.533000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 03:21:51.533000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe3af2b030 a2=4000 a3=0 items=0 ppid=1 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:21:51.533000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 03:21:51.082434 systemd[1]: Queued start job for default target multi-user.target. Dec 16 03:21:51.543015 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:21:51.093883 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 16 03:21:51.094272 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 03:21:51.548808 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 03:21:51.556232 kernel: fuse: init (API version 7.41) Dec 16 03:21:51.556868 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:21:51.563165 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:21:51.568241 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:21:51.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.570693 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:21:51.575161 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:21:51.577220 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:21:51.579335 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:21:51.581403 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 03:21:51.583353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:21:51.588424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:21:51.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.590397 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:21:51.590616 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:21:51.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.594531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:21:51.594651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:21:51.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.597447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:21:51.597590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:21:51.601424 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:21:51.601570 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 03:21:51.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:21:51.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.604397 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:21:51.604530 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:21:51.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.607403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:21:51.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.610802 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:21:51.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.617919 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 03:21:51.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.620890 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:21:51.633697 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:21:51.637476 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:21:51.642225 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:21:51.650229 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:21:51.652131 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:21:51.652176 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 16 03:21:51.658151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:21:51.671899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:21:51.672011 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:21:51.674251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:21:51.678261 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 03:21:51.680612 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:21:51.683740 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:21:51.690578 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:21:51.695315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:21:51.700322 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:21:51.705280 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:21:51.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.709489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:21:51.713201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:21:51.716928 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 03:21:51.726355 systemd-journald[2037]: Time spent on flushing to /var/log/journal/52959bf305c7479fac492c1b5a57f373 is 23.040ms for 1133 entries. Dec 16 03:21:51.726355 systemd-journald[2037]: System Journal (/var/log/journal/52959bf305c7479fac492c1b5a57f373) is 8M, max 2.2G, 2.2G free. Dec 16 03:21:51.784291 systemd-journald[2037]: Received client request to flush runtime journal. Dec 16 03:21:51.784376 kernel: ACPI: bus type drm_connector registered Dec 16 03:21:51.784404 kernel: loop1: detected capacity change from 0 to 229808 Dec 16 03:21:51.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:21:51.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.733379 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:21:51.733571 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:21:51.738439 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:21:51.747727 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 03:21:51.753821 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:21:51.758321 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:21:51.782099 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:21:51.784441 systemd-tmpfiles[2093]: ACLs are not supported, ignoring. Dec 16 03:21:51.784453 systemd-tmpfiles[2093]: ACLs are not supported, ignoring. Dec 16 03:21:51.785670 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:21:51.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.792086 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:21:51.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.795001 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:21:51.820684 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 03:21:51.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.830157 kernel: loop2: detected capacity change from 0 to 27728 Dec 16 03:21:51.927251 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:21:51.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:51.929000 audit: BPF prog-id=18 op=LOAD Dec 16 03:21:51.929000 audit: BPF prog-id=19 op=LOAD Dec 16 03:21:51.929000 audit: BPF prog-id=20 op=LOAD Dec 16 03:21:51.932808 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:21:51.937000 audit: BPF prog-id=21 op=LOAD Dec 16 03:21:51.940353 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:21:51.945297 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:21:51.952000 audit: BPF prog-id=22 op=LOAD Dec 16 03:21:51.956000 audit: BPF prog-id=23 op=LOAD Dec 16 03:21:51.956000 audit: BPF prog-id=24 op=LOAD Dec 16 03:21:51.960342 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Dec 16 03:21:51.962000 audit: BPF prog-id=25 op=LOAD Dec 16 03:21:51.962000 audit: BPF prog-id=26 op=LOAD Dec 16 03:21:51.962000 audit: BPF prog-id=27 op=LOAD Dec 16 03:21:51.966291 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:21:51.975740 systemd-tmpfiles[2114]: ACLs are not supported, ignoring. Dec 16 03:21:51.975992 systemd-tmpfiles[2114]: ACLs are not supported, ignoring. Dec 16 03:21:51.985238 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:21:51.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.022031 systemd-nsresourced[2117]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:21:52.023223 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:21:52.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.037037 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:21:52.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.100355 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 03:21:52.124933 systemd-oomd[2112]: No swap; memory pressure usage will be degraded Dec 16 03:21:52.125973 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 03:21:52.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.165675 systemd-resolved[2113]: Positive Trust Anchors: Dec 16 03:21:52.165922 systemd-resolved[2113]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:21:52.165930 systemd-resolved[2113]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:21:52.165969 systemd-resolved[2113]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:21:52.263164 kernel: loop3: detected capacity change from 0 to 50784 Dec 16 03:21:52.276368 systemd-resolved[2113]: Using system hostname 'ci-4547.0.0-a-dc3ed46bb5'. Dec 16 03:21:52.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.278252 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 03:21:52.280913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:21:52.380700 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:21:52.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.383000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:21:52.383000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:21:52.383000 audit: BPF prog-id=28 op=LOAD Dec 16 03:21:52.383000 audit: BPF prog-id=29 op=LOAD Dec 16 03:21:52.385688 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:21:52.416040 systemd-udevd[2135]: Using default interface naming scheme 'v257'. Dec 16 03:21:52.639383 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:21:52.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.643000 audit: BPF prog-id=30 op=LOAD Dec 16 03:21:52.646283 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:21:52.653269 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 03:21:52.715899 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:21:52.735181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#239 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 03:21:52.767159 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 03:21:52.770574 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 03:21:52.770623 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 03:21:52.771863 kernel: Console: switching to colour dummy device 80x25 Dec 16 03:21:52.776536 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 03:21:52.778175 kernel: hv_vmbus: registering driver hv_balloon Dec 16 03:21:52.778226 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:21:52.785161 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 03:21:52.800269 systemd-networkd[2145]: lo: Link UP Dec 16 03:21:52.800511 systemd-networkd[2145]: lo: Gained carrier Dec 16 03:21:52.801862 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:21:52.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.803398 systemd[1]: Reached target network.target - Network. Dec 16 03:21:52.805104 systemd-networkd[2145]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:21:52.805109 systemd-networkd[2145]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:21:52.808198 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Dec 16 03:21:52.811258 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:21:52.815116 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 16 03:21:52.819181 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Dec 16 03:21:52.824626 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d741e2e eth0: Data path switched to VF: enP30832s1 Dec 16 03:21:52.824002 systemd-networkd[2145]: enP30832s1: Link UP Dec 16 03:21:52.824110 systemd-networkd[2145]: eth0: Link UP Dec 16 03:21:52.824114 systemd-networkd[2145]: eth0: Gained carrier Dec 16 03:21:52.824130 systemd-networkd[2145]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:21:52.828566 systemd-networkd[2145]: enP30832s1: Gained carrier Dec 16 03:21:52.835443 systemd-networkd[2145]: eth0: DHCPv4 address 10.200.8.23/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 16 03:21:52.868297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:21:52.892378 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 03:21:52.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.905082 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:21:52.905348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:21:52.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.914380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:21:52.934884 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:21:52.936209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:21:52.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:52.942256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:21:53.047173 kernel: loop5: detected capacity change from 0 to 229808 Dec 16 03:21:53.089800 kernel: loop6: detected capacity change from 0 to 27728 Dec 16 03:21:53.101075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Dec 16 03:21:53.101220 kernel: loop7: detected capacity change from 0 to 50784 Dec 16 03:21:53.106269 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Dec 16 03:21:53.119868 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:21:53.145162 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Dec 16 03:21:53.160359 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:21:53.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.163695 (sd-merge)[2221]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Dec 16 03:21:53.166595 (sd-merge)[2221]: Merged extensions into '/usr'. Dec 16 03:21:53.169930 systemd[1]: Reload requested from client PID 2091 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:21:53.169944 systemd[1]: Reloading... Dec 16 03:21:53.223183 zram_generator::config[2257]: No configuration found. Dec 16 03:21:53.444871 systemd[1]: Reloading finished in 274 ms. Dec 16 03:21:53.481271 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 03:21:53.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.484610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:21:53.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.499063 systemd[1]: Starting ensure-sysext.service... Dec 16 03:21:53.501108 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 16 03:21:53.503000 audit: BPF prog-id=31 op=LOAD Dec 16 03:21:53.503000 audit: BPF prog-id=22 op=UNLOAD Dec 16 03:21:53.503000 audit: BPF prog-id=32 op=LOAD Dec 16 03:21:53.503000 audit: BPF prog-id=33 op=LOAD Dec 16 03:21:53.503000 audit: BPF prog-id=23 op=UNLOAD Dec 16 03:21:53.503000 audit: BPF prog-id=24 op=UNLOAD Dec 16 03:21:53.504000 audit: BPF prog-id=34 op=LOAD Dec 16 03:21:53.504000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:21:53.504000 audit: BPF prog-id=35 op=LOAD Dec 16 03:21:53.504000 audit: BPF prog-id=36 op=LOAD Dec 16 03:21:53.504000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:21:53.504000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:21:53.506000 audit: BPF prog-id=37 op=LOAD Dec 16 03:21:53.509000 audit: BPF prog-id=30 op=UNLOAD Dec 16 03:21:53.511000 audit: BPF prog-id=38 op=LOAD Dec 16 03:21:53.511000 audit: BPF prog-id=39 op=LOAD Dec 16 03:21:53.511000 audit: BPF prog-id=28 op=UNLOAD Dec 16 03:21:53.511000 audit: BPF prog-id=29 op=UNLOAD Dec 16 03:21:53.512000 audit: BPF prog-id=40 op=LOAD Dec 16 03:21:53.512000 audit: BPF prog-id=25 op=UNLOAD Dec 16 03:21:53.512000 audit: BPF prog-id=41 op=LOAD Dec 16 03:21:53.512000 audit: BPF prog-id=42 op=LOAD Dec 16 03:21:53.512000 audit: BPF prog-id=26 op=UNLOAD Dec 16 03:21:53.512000 audit: BPF prog-id=27 op=UNLOAD Dec 16 03:21:53.512000 audit: BPF prog-id=43 op=LOAD Dec 16 03:21:53.512000 audit: BPF prog-id=21 op=UNLOAD Dec 16 03:21:53.514000 audit: BPF prog-id=44 op=LOAD Dec 16 03:21:53.514000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:21:53.514000 audit: BPF prog-id=45 op=LOAD Dec 16 03:21:53.514000 audit: BPF prog-id=46 op=LOAD Dec 16 03:21:53.514000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:21:53.514000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:21:53.521462 systemd[1]: Reload requested from client PID 2317 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:21:53.521553 systemd[1]: Reloading... Dec 16 03:21:53.523853 systemd-tmpfiles[2318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:21:53.523882 systemd-tmpfiles[2318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:21:53.524080 systemd-tmpfiles[2318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:21:53.525105 systemd-tmpfiles[2318]: ACLs are not supported, ignoring. Dec 16 03:21:53.525181 systemd-tmpfiles[2318]: ACLs are not supported, ignoring. Dec 16 03:21:53.561838 systemd-tmpfiles[2318]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:21:53.561847 systemd-tmpfiles[2318]: Skipping /boot Dec 16 03:21:53.570275 systemd-tmpfiles[2318]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:21:53.570362 systemd-tmpfiles[2318]: Skipping /boot Dec 16 03:21:53.590216 zram_generator::config[2351]: No configuration found. Dec 16 03:21:53.774591 systemd[1]: Reloading finished in 252 ms. 
Dec 16 03:21:53.787000 audit: BPF prog-id=47 op=LOAD Dec 16 03:21:53.787000 audit: BPF prog-id=34 op=UNLOAD Dec 16 03:21:53.787000 audit: BPF prog-id=48 op=LOAD Dec 16 03:21:53.787000 audit: BPF prog-id=49 op=LOAD Dec 16 03:21:53.787000 audit: BPF prog-id=35 op=UNLOAD Dec 16 03:21:53.787000 audit: BPF prog-id=36 op=UNLOAD Dec 16 03:21:53.788000 audit: BPF prog-id=50 op=LOAD Dec 16 03:21:53.788000 audit: BPF prog-id=40 op=UNLOAD Dec 16 03:21:53.788000 audit: BPF prog-id=51 op=LOAD Dec 16 03:21:53.788000 audit: BPF prog-id=52 op=LOAD Dec 16 03:21:53.788000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:21:53.788000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:21:53.789000 audit: BPF prog-id=53 op=LOAD Dec 16 03:21:53.789000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:21:53.790000 audit: BPF prog-id=54 op=LOAD Dec 16 03:21:53.790000 audit: BPF prog-id=37 op=UNLOAD Dec 16 03:21:53.796000 audit: BPF prog-id=55 op=LOAD Dec 16 03:21:53.796000 audit: BPF prog-id=56 op=LOAD Dec 16 03:21:53.796000 audit: BPF prog-id=38 op=UNLOAD Dec 16 03:21:53.796000 audit: BPF prog-id=39 op=UNLOAD Dec 16 03:21:53.797000 audit: BPF prog-id=57 op=LOAD Dec 16 03:21:53.797000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:21:53.797000 audit: BPF prog-id=58 op=LOAD Dec 16 03:21:53.797000 audit: BPF prog-id=59 op=LOAD Dec 16 03:21:53.798000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:21:53.798000 audit: BPF prog-id=46 op=UNLOAD Dec 16 03:21:53.798000 audit: BPF prog-id=60 op=LOAD Dec 16 03:21:53.798000 audit: BPF prog-id=31 op=UNLOAD Dec 16 03:21:53.798000 audit: BPF prog-id=61 op=LOAD Dec 16 03:21:53.798000 audit: BPF prog-id=62 op=LOAD Dec 16 03:21:53.798000 audit: BPF prog-id=32 op=UNLOAD Dec 16 03:21:53.798000 audit: BPF prog-id=33 op=UNLOAD Dec 16 03:21:53.801611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:21:53.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.809800 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:21:53.813342 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 03:21:53.818360 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 03:21:53.822605 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:21:53.827364 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:21:53.835100 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:21:53.836376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:21:53.838368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:21:53.841000 audit[2416]: SYSTEM_BOOT pid=2416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.843238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:21:53.850211 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 03:21:53.852656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:21:53.852855 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:21:53.852971 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:21:53.853062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:21:53.854352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:21:53.854544 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:21:53.857399 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:21:53.857585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:21:53.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.860949 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:21:53.861115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:21:53.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.872381 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:21:53.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.879950 systemd[1]: Finished ensure-sysext.service. Dec 16 03:21:53.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.882710 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 03:21:53.882918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:21:53.883711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:21:53.889574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:21:53.897314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:21:53.904133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:21:53.909392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:21:53.909488 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:21:53.909529 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:21:53.909584 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:21:53.912365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:21:53.912823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:21:53.913172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:21:53.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.915224 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:21:53.915396 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:21:53.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.918496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:21:53.920356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:21:53.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.923999 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:21:53.924228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 16 03:21:53.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:53.928289 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:21:53.928338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:21:53.957232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:21:53.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:21:54.228000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:21:54.228000 audit[2453]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb4253590 a2=420 a3=0 items=0 ppid=2412 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:21:54.228000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:21:54.229578 augenrules[2453]: No rules Dec 16 03:21:54.229961 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:21:54.230247 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:21:54.297260 systemd-networkd[2145]: eth0: Gained IPv6LL Dec 16 03:21:54.299367 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:21:54.301523 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:21:54.594122 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:21:54.597428 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:21:59.077570 ldconfig[2414]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:21:59.086561 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 03:21:59.089466 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:21:59.104285 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:21:59.107361 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:21:59.110293 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:21:59.112079 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:21:59.113666 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Dec 16 03:21:59.115581 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:21:59.118257 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:21:59.119931 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:21:59.123248 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:21:59.126198 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:21:59.129204 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:21:59.129237 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:21:59.132201 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:21:59.150397 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:21:59.153071 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 03:21:59.157950 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:21:59.161378 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 03:21:59.163103 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:21:59.177662 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:21:59.179494 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:21:59.181681 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:21:59.184924 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:21:59.188208 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:21:59.189464 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:21:59.189485 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:21:59.191500 systemd[1]: Starting chronyd.service - NTP client/server... Dec 16 03:21:59.197231 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:21:59.207210 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 03:21:59.212097 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:21:59.217345 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:21:59.222575 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:21:59.227775 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:21:59.230185 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:21:59.235260 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 03:21:59.237516 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Dec 16 03:21:59.241372 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 16 03:21:59.245297 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Dec 16 03:21:59.246681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:21:59.252363 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:21:59.258753 KVP[2477]: KVP starting; pid is:2477 Dec 16 03:21:59.259310 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:21:59.263715 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:21:59.266572 jq[2474]: false Dec 16 03:21:59.270337 KVP[2477]: KVP LIC Version: 3.1 Dec 16 03:21:59.271264 kernel: hv_utils: KVP IC version 4.0 Dec 16 03:21:59.273852 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 03:21:59.278234 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:21:59.287256 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:21:59.291323 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 03:21:59.291779 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:21:59.297515 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:21:59.304767 google_oslogin_nss_cache[2476]: oslogin_cache_refresh[2476]: Refreshing passwd entry cache Dec 16 03:21:59.305053 oslogin_cache_refresh[2476]: Refreshing passwd entry cache Dec 16 03:21:59.306209 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:21:59.318036 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:21:59.321097 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:21:59.322053 extend-filesystems[2475]: Found /dev/nvme0n1p6 Dec 16 03:21:59.324192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:21:59.324118 oslogin_cache_refresh[2476]: Failure getting users, quitting Dec 16 03:21:59.324395 google_oslogin_nss_cache[2476]: oslogin_cache_refresh[2476]: Failure getting users, quitting Dec 16 03:21:59.324723 google_oslogin_nss_cache[2476]: oslogin_cache_refresh[2476]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:21:59.324723 oslogin_cache_refresh[2476]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:21:59.324781 oslogin_cache_refresh[2476]: Refreshing group entry cache Dec 16 03:21:59.324814 google_oslogin_nss_cache[2476]: oslogin_cache_refresh[2476]: Refreshing group entry cache Dec 16 03:21:59.329783 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:21:59.332813 jq[2493]: true Dec 16 03:21:59.333376 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 03:21:59.347582 extend-filesystems[2475]: Found /dev/nvme0n1p9 Dec 16 03:21:59.353544 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:21:59.354208 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:21:59.360308 google_oslogin_nss_cache[2476]: oslogin_cache_refresh[2476]: Failure getting groups, quitting Dec 16 03:21:59.360308 google_oslogin_nss_cache[2476]: oslogin_cache_refresh[2476]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:21:59.359981 systemd[1]: google-oslogin-cache.service: Deactivated successfully. 
Dec 16 03:21:59.357891 oslogin_cache_refresh[2476]: Failure getting groups, quitting Dec 16 03:21:59.360240 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:21:59.357902 oslogin_cache_refresh[2476]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:21:59.371419 extend-filesystems[2475]: Checking size of /dev/nvme0n1p9 Dec 16 03:21:59.381307 chronyd[2466]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 16 03:21:59.387613 chronyd[2466]: Timezone right/UTC failed leap second check, ignoring Dec 16 03:21:59.387946 systemd[1]: Started chronyd.service - NTP client/server. Dec 16 03:21:59.387788 chronyd[2466]: Loaded seccomp filter (level 2) Dec 16 03:21:59.395367 jq[2507]: true Dec 16 03:21:59.396508 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 03:21:59.417613 extend-filesystems[2475]: Resized partition /dev/nvme0n1p9 Dec 16 03:21:59.441233 extend-filesystems[2537]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:21:59.448320 update_engine[2490]: I20251216 03:21:59.447904 2490 main.cc:92] Flatcar Update Engine starting Dec 16 03:21:59.456327 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Dec 16 03:21:59.456372 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Dec 16 03:21:59.462928 tar[2505]: linux-amd64/LICENSE Dec 16 03:21:59.470171 tar[2505]: linux-amd64/helm Dec 16 03:21:59.476196 extend-filesystems[2537]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 03:21:59.476196 extend-filesystems[2537]: old_desc_blocks = 4, new_desc_blocks = 4 Dec 16 03:21:59.476196 extend-filesystems[2537]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Dec 16 03:21:59.493669 extend-filesystems[2475]: Resized filesystem in /dev/nvme0n1p9 Dec 16 03:21:59.493445 dbus-daemon[2469]: [system] SELinux support is enabled Dec 16 03:21:59.480264 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 03:21:59.480521 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:21:59.493647 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 03:21:59.500721 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:21:59.500753 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 03:21:59.503879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:21:59.503908 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 03:21:59.511102 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:21:59.514593 update_engine[2490]: I20251216 03:21:59.514432 2490 update_check_scheduler.cc:74] Next update check in 8m35s Dec 16 03:21:59.516301 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:21:59.526738 systemd-logind[2488]: New seat seat0. 
Dec 16 03:21:59.547743 bash[2554]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:21:59.548542 systemd-logind[2488]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 16 03:21:59.548784 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:21:59.551946 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:21:59.557645 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 03:21:59.596820 coreos-metadata[2468]: Dec 16 03:21:59.596 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 03:21:59.615866 coreos-metadata[2468]: Dec 16 03:21:59.603 INFO Fetch successful Dec 16 03:21:59.615866 coreos-metadata[2468]: Dec 16 03:21:59.603 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 16 03:21:59.615866 coreos-metadata[2468]: Dec 16 03:21:59.609 INFO Fetch successful Dec 16 03:21:59.615866 coreos-metadata[2468]: Dec 16 03:21:59.610 INFO Fetching http://168.63.129.16/machine/181d1519-362b-4604-991e-d27b4db803c4/d3186960%2D5841%2D465e%2Db8e1%2Df0d4a7337b93.%5Fci%2D4547.0.0%2Da%2Ddc3ed46bb5?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 16 03:21:59.615866 coreos-metadata[2468]: Dec 16 03:21:59.613 INFO Fetch successful Dec 16 03:21:59.615866 coreos-metadata[2468]: Dec 16 03:21:59.613 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 16 03:21:59.622431 coreos-metadata[2468]: Dec 16 03:21:59.622 INFO Fetch successful Dec 16 03:21:59.715389 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 03:21:59.721011 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:21:59.860257 locksmithd[2557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:22:00.112815 tar[2505]: linux-amd64/README.md Dec 16 03:22:00.152787 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:22:00.240508 sshd_keygen[2499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:22:00.268087 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:22:00.275400 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:22:00.280362 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 16 03:22:00.301378 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:22:00.301626 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:22:00.307864 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:22:00.324420 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 16 03:22:00.327462 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:22:00.337438 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:22:00.340886 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:22:00.343200 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 03:22:00.478998 containerd[2508]: time="2025-12-16T03:22:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:22:00.480122 containerd[2508]: time="2025-12-16T03:22:00.479920111Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:22:00.494659 containerd[2508]: time="2025-12-16T03:22:00.494505198Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.014µs" Dec 16 03:22:00.494659 containerd[2508]: time="2025-12-16T03:22:00.494539480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:22:00.494659 containerd[2508]: time="2025-12-16T03:22:00.494576548Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:22:00.494659 containerd[2508]: time="2025-12-16T03:22:00.494587662Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:22:00.494804 containerd[2508]: time="2025-12-16T03:22:00.494705015Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:22:00.494804 containerd[2508]: time="2025-12-16T03:22:00.494717052Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:22:00.494804 containerd[2508]: time="2025-12-16T03:22:00.494762081Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:22:00.494804 containerd[2508]: time="2025-12-16T03:22:00.494772000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:22:00.495023 containerd[2508]: time="2025-12-16T03:22:00.494991635Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:22:00.495023 containerd[2508]: time="2025-12-16T03:22:00.495008626Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:22:00.495023 containerd[2508]: time="2025-12-16T03:22:00.495019514Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:22:00.495098 containerd[2508]: time="2025-12-16T03:22:00.495027229Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:22:00.496685 containerd[2508]: time="2025-12-16T03:22:00.495193584Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:22:00.496685 containerd[2508]: time="2025-12-16T03:22:00.495207716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:22:00.496685 containerd[2508]: time="2025-12-16T03:22:00.495271953Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 
16 03:22:00.496685 containerd[2508]: time="2025-12-16T03:22:00.495418407Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:22:00.496685 containerd[2508]: time="2025-12-16T03:22:00.495439846Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:22:00.496685 containerd[2508]: time="2025-12-16T03:22:00.495449831Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:22:00.496685 containerd[2508]: time="2025-12-16T03:22:00.495484370Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:22:00.498168 containerd[2508]: time="2025-12-16T03:22:00.497215023Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:22:00.498168 containerd[2508]: time="2025-12-16T03:22:00.497317170Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:22:00.508621 containerd[2508]: time="2025-12-16T03:22:00.508496962Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:22:00.508621 containerd[2508]: time="2025-12-16T03:22:00.508556064Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:22:00.509221 containerd[2508]: time="2025-12-16T03:22:00.509196200Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:22:00.509367 containerd[2508]: time="2025-12-16T03:22:00.509265149Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:22:00.509367 containerd[2508]: time="2025-12-16T03:22:00.509283329Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 03:22:00.509367 containerd[2508]: time="2025-12-16T03:22:00.509296216Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:22:00.509367 containerd[2508]: time="2025-12-16T03:22:00.509308120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:22:00.509852 containerd[2508]: time="2025-12-16T03:22:00.509319748Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:22:00.509852 containerd[2508]: time="2025-12-16T03:22:00.509809492Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:22:00.509852 containerd[2508]: time="2025-12-16T03:22:00.509827300Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.509840112Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.509953787Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.509963280Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.509975611Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.510987693Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511023714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511037104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511045106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511053100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511060377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511076792Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511091633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511101874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511112599Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:22:00.511173 containerd[2508]: time="2025-12-16T03:22:00.511121381Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:22:00.511523 containerd[2508]: time="2025-12-16T03:22:00.511161835Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:22:00.511523 containerd[2508]: time="2025-12-16T03:22:00.511212267Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 03:22:00.511523 containerd[2508]: time="2025-12-16T03:22:00.511243594Z" level=info msg="Start snapshots syncer" Dec 16 03:22:00.511523 containerd[2508]: time="2025-12-16T03:22:00.511270118Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:22:00.511718 containerd[2508]: time="2025-12-16T03:22:00.511676764Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:22:00.511836 containerd[2508]: time="2025-12-16T03:22:00.511730398Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:22:00.512604 containerd[2508]: time="2025-12-16T03:22:00.512584480Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:22:00.512750 containerd[2508]: time="2025-12-16T03:22:00.512733660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:22:00.512780 containerd[2508]: time="2025-12-16T03:22:00.512755309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:22:00.512780 containerd[2508]: time="2025-12-16T03:22:00.512767340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:22:00.512826 containerd[2508]: time="2025-12-16T03:22:00.512800809Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:22:00.512826 containerd[2508]: time="2025-12-16T03:22:00.512814414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:22:00.512873 containerd[2508]: time="2025-12-16T03:22:00.512825940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:22:00.512873 containerd[2508]: time="2025-12-16T03:22:00.512840980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:22:00.512873 containerd[2508]: time="2025-12-16T03:22:00.512853367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 
03:22:00.512936 containerd[2508]: time="2025-12-16T03:22:00.512864654Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:22:00.512936 containerd[2508]: time="2025-12-16T03:22:00.512911058Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:22:00.512936 containerd[2508]: time="2025-12-16T03:22:00.512927909Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:22:00.513000 containerd[2508]: time="2025-12-16T03:22:00.512938869Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:22:00.513022 containerd[2508]: time="2025-12-16T03:22:00.513001688Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:22:00.513022 containerd[2508]: time="2025-12-16T03:22:00.513010472Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:22:00.513063 containerd[2508]: time="2025-12-16T03:22:00.513020607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:22:00.513063 containerd[2508]: time="2025-12-16T03:22:00.513042675Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:22:00.513063 containerd[2508]: time="2025-12-16T03:22:00.513054929Z" level=info msg="runtime interface created" Dec 16 03:22:00.513063 containerd[2508]: time="2025-12-16T03:22:00.513060047Z" level=info msg="created NRI interface" Dec 16 03:22:00.513162 containerd[2508]: time="2025-12-16T03:22:00.513069216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:22:00.513162 containerd[2508]: time="2025-12-16T03:22:00.513081589Z" level=info msg="Connect containerd service" Dec 16 03:22:00.513162 containerd[2508]: time="2025-12-16T03:22:00.513111146Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:22:00.514882 containerd[2508]: time="2025-12-16T03:22:00.514656116Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:22:00.760677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
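The containerd error above ("no network config found in /etc/cni/net.d ... cni plugin not initialized") is expected at this point in boot: the CRI plugin defers CNI setup until a network configuration appears in the confDir shown in its config dump. A minimal conflist of the kind it looks for, e.g. installed as /etc/cni/net.d/10-containerd-net.conflist, is sketched below; the network name, bridge name, and subnet are illustrative assumptions, not values from this host.

{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}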
Dec 16 03:22:00.772663 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296480807Z" level=info msg="Start subscribing containerd event" Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296551971Z" level=info msg="Start recovering state" Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296666658Z" level=info msg="Start event monitor" Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296680787Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296688265Z" level=info msg="Start streaming server" Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296697491Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296704659Z" level=info msg="runtime interface starting up..." Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296711074Z" level=info msg="starting plugins..." Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.296723387Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.297245152Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 03:22:01.297877 containerd[2508]: time="2025-12-16T03:22:01.297311217Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:22:01.298226 kubelet[2635]: E1216 03:22:01.297791 2635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:22:01.297623 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:22:01.300231 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:22:01.302540 systemd[1]: Startup finished in 4.371s (kernel) + 11.057s (initrd) + 12.838s (userspace) = 28.268s. Dec 16 03:22:01.306154 containerd[2508]: time="2025-12-16T03:22:01.305484496Z" level=info msg="containerd successfully booted in 0.827000s" Dec 16 03:22:01.308225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:22:01.308364 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:22:01.311489 systemd[1]: kubelet.service: Consumed 999ms CPU time, 266M memory peak. Dec 16 03:22:01.579660 login[2624]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:01.585569 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 03:22:01.586477 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 03:22:01.594622 systemd-logind[2488]: New session 1 of user core. Dec 16 03:22:01.605727 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:22:01.609690 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 03:22:01.619859 (systemd)[2657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:01.622835 systemd-logind[2488]: New session 2 of user core. 
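The kubelet failure above (it recurs on every scheduled restart later in the log) is consistent with a node that has not yet been joined to a cluster: kubelet.service starts, but /var/lib/kubelet/config.yaml only exists after a bootstrap step such as kubeadm init/join (or equivalent provisioning) writes it, so the unit exits with status 1 and systemd keeps rescheduling it. For orientation only, a minimal KubeletConfiguration of the kind expected at that path is sketched below; every value is an illustrative assumption rather than this node's eventual configuration.

# /var/lib/kubelet/config.yaml -- minimal sketch; kubeadm normally generates
# this file during bootstrap, and the values below are placeholders.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook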
Dec 16 03:22:01.664823 login[2625]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:01.676377 systemd-logind[2488]: New session 3 of user core. Dec 16 03:22:01.809652 systemd[2657]: Queued start job for default target default.target. Dec 16 03:22:01.814414 systemd[2657]: Created slice app.slice - User Application Slice. Dec 16 03:22:01.814546 systemd[2657]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 03:22:01.814629 systemd[2657]: Reached target paths.target - Paths. Dec 16 03:22:01.814837 systemd[2657]: Reached target timers.target - Timers. Dec 16 03:22:01.817455 systemd[2657]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:22:01.818251 systemd[2657]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:22:01.833767 systemd[2657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:22:01.835000 systemd[2657]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:22:01.835201 systemd[2657]: Reached target sockets.target - Sockets. Dec 16 03:22:01.835310 systemd[2657]: Reached target basic.target - Basic System. Dec 16 03:22:01.835417 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:22:01.836215 systemd[2657]: Reached target default.target - Main User Target. Dec 16 03:22:01.836248 systemd[2657]: Startup finished in 208ms. Dec 16 03:22:01.838366 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 03:22:01.838909 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:22:01.888348 waagent[2622]: 2025-12-16T03:22:01.888279Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 03:22:01.889870 waagent[2622]: 2025-12-16T03:22:01.889789Z INFO Daemon Daemon OS: flatcar 4547.0.0 Dec 16 03:22:01.891033 waagent[2622]: 2025-12-16T03:22:01.890964Z INFO Daemon Daemon Python: 3.11.13 Dec 16 03:22:01.892292 waagent[2622]: 2025-12-16T03:22:01.892256Z INFO Daemon Daemon Run daemon Dec 16 03:22:01.893631 waagent[2622]: 2025-12-16T03:22:01.893590Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4547.0.0' Dec 16 03:22:01.896625 waagent[2622]: 2025-12-16T03:22:01.895189Z INFO Daemon Daemon Using waagent for provisioning Dec 16 03:22:01.896625 waagent[2622]: 2025-12-16T03:22:01.895552Z INFO Daemon Daemon Activate resource disk Dec 16 03:22:01.896625 waagent[2622]: 2025-12-16T03:22:01.895797Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 03:22:01.897645 waagent[2622]: 2025-12-16T03:22:01.897612Z INFO Daemon Daemon Found device: None Dec 16 03:22:01.897981 waagent[2622]: 2025-12-16T03:22:01.897959Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 03:22:01.898335 waagent[2622]: 2025-12-16T03:22:01.898316Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 03:22:01.899132 waagent[2622]: 2025-12-16T03:22:01.899099Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 03:22:01.899425 waagent[2622]: 2025-12-16T03:22:01.899403Z INFO Daemon Daemon Running default provisioning handler Dec 16 03:22:01.906329 waagent[2622]: 2025-12-16T03:22:01.906278Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 
'cloud-init-local.service']' returned non-zero exit status 4. Dec 16 03:22:01.907025 waagent[2622]: 2025-12-16T03:22:01.906992Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 03:22:01.907370 waagent[2622]: 2025-12-16T03:22:01.907347Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 03:22:01.907923 waagent[2622]: 2025-12-16T03:22:01.907905Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 03:22:01.991734 waagent[2622]: 2025-12-16T03:22:01.991679Z INFO Daemon Daemon Successfully mounted dvd Dec 16 03:22:02.018158 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 03:22:02.019241 waagent[2622]: 2025-12-16T03:22:02.018808Z INFO Daemon Daemon Detect protocol endpoint Dec 16 03:22:02.019241 waagent[2622]: 2025-12-16T03:22:02.018985Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 03:22:02.019602 waagent[2622]: 2025-12-16T03:22:02.019579Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 03:22:02.019876 waagent[2622]: 2025-12-16T03:22:02.019856Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 03:22:02.020281 waagent[2622]: 2025-12-16T03:22:02.020262Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 03:22:02.020498 waagent[2622]: 2025-12-16T03:22:02.020481Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 03:22:02.033409 waagent[2622]: 2025-12-16T03:22:02.033369Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 03:22:02.035640 waagent[2622]: 2025-12-16T03:22:02.033668Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 03:22:02.035640 waagent[2622]: 2025-12-16T03:22:02.033856Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 03:22:02.186923 waagent[2622]: 2025-12-16T03:22:02.186786Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 03:22:02.188664 waagent[2622]: 2025-12-16T03:22:02.188621Z INFO Daemon Daemon Forcing an update of the goal state. Dec 16 03:22:02.194341 waagent[2622]: 2025-12-16T03:22:02.194305Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 03:22:02.209002 waagent[2622]: 2025-12-16T03:22:02.208966Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 03:22:02.210729 waagent[2622]: 2025-12-16T03:22:02.210693Z INFO Daemon Dec 16 03:22:02.211517 waagent[2622]: 2025-12-16T03:22:02.211448Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 768562e2-57fb-4f39-b5ee-33e6bd16ea46 eTag: 13620226858450012992 source: Fabric] Dec 16 03:22:02.214309 waagent[2622]: 2025-12-16T03:22:02.214273Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 03:22:02.215942 waagent[2622]: 2025-12-16T03:22:02.215909Z INFO Daemon Dec 16 03:22:02.216631 waagent[2622]: 2025-12-16T03:22:02.216564Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 03:22:02.222145 waagent[2622]: 2025-12-16T03:22:02.222117Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 03:22:02.308069 waagent[2622]: 2025-12-16T03:22:02.308016Z INFO Daemon Downloaded certificate {'thumbprint': '91A8071599611137E17CA709D7EEA88D0A17CCC4', 'hasPrivateKey': True} Dec 16 03:22:02.310546 waagent[2622]: 2025-12-16T03:22:02.310509Z INFO Daemon Fetch goal state completed Dec 16 03:22:02.327219 waagent[2622]: 2025-12-16T03:22:02.327187Z INFO Daemon Daemon Starting provisioning Dec 16 03:22:02.327864 waagent[2622]: 2025-12-16T03:22:02.327553Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 16 03:22:02.328470 waagent[2622]: 2025-12-16T03:22:02.328438Z INFO Daemon Daemon Set hostname [ci-4547.0.0-a-dc3ed46bb5] Dec 16 03:22:02.360539 waagent[2622]: 2025-12-16T03:22:02.360494Z INFO Daemon Daemon Publish hostname [ci-4547.0.0-a-dc3ed46bb5] Dec 16 03:22:02.365041 waagent[2622]: 2025-12-16T03:22:02.360810Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 03:22:02.365041 waagent[2622]: 2025-12-16T03:22:02.361092Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 03:22:02.369090 systemd-networkd[2145]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:22:02.369098 systemd-networkd[2145]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:22:02.369178 systemd-networkd[2145]: eth0: DHCP lease lost Dec 16 03:22:02.382478 waagent[2622]: 2025-12-16T03:22:02.382429Z INFO Daemon Daemon Create user account if not exists Dec 16 03:22:02.384670 waagent[2622]: 2025-12-16T03:22:02.383838Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 03:22:02.384670 waagent[2622]: 2025-12-16T03:22:02.384196Z INFO Daemon Daemon Configure sudoer Dec 16 03:22:02.387609 waagent[2622]: 2025-12-16T03:22:02.387568Z INFO Daemon Daemon Configure sshd Dec 16 03:22:02.391238 waagent[2622]: 2025-12-16T03:22:02.391200Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 03:22:02.392189 systemd-networkd[2145]: eth0: DHCPv4 address 10.200.8.23/24, gateway 10.200.8.1 acquired from 168.63.129.16 Dec 16 03:22:02.396439 waagent[2622]: 2025-12-16T03:22:02.394700Z INFO Daemon Daemon Deploy ssh public key. Dec 16 03:22:03.496460 waagent[2622]: 2025-12-16T03:22:03.496419Z INFO Daemon Daemon Provisioning complete Dec 16 03:22:03.510575 waagent[2622]: 2025-12-16T03:22:03.510540Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 03:22:03.510925 waagent[2622]: 2025-12-16T03:22:03.510758Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
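The "configuration snippet disabling SSH password-based authentication methods" and the "client probing to keep connections alive" that waagent reports above correspond to a small sshd_config drop-in. The literal file the agent writes is not reproduced in this log; the sketch below only illustrates the directives that description implies, and the keepalive interval is an assumed value.

# Illustrative sshd_config drop-in matching the behaviour described above;
# not the agent's literal output. The ClientAliveInterval value is assumed.
PasswordAuthentication no
ChallengeResponseAuthentication no
ClientAliveInterval 180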
Dec 16 03:22:03.515569 waagent[2622]: 2025-12-16T03:22:03.510964Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 03:22:03.616490 waagent[2715]: 2025-12-16T03:22:03.616418Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 03:22:03.616777 waagent[2715]: 2025-12-16T03:22:03.616524Z INFO ExtHandler ExtHandler OS: flatcar 4547.0.0 Dec 16 03:22:03.616777 waagent[2715]: 2025-12-16T03:22:03.616567Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 03:22:03.616777 waagent[2715]: 2025-12-16T03:22:03.616608Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Dec 16 03:22:03.653947 waagent[2715]: 2025-12-16T03:22:03.653896Z INFO ExtHandler ExtHandler Distro: flatcar-4547.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 03:22:03.654078 waagent[2715]: 2025-12-16T03:22:03.654052Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 03:22:03.654129 waagent[2715]: 2025-12-16T03:22:03.654109Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 03:22:03.661146 waagent[2715]: 2025-12-16T03:22:03.661087Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 03:22:03.672346 waagent[2715]: 2025-12-16T03:22:03.672312Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 03:22:03.672689 waagent[2715]: 2025-12-16T03:22:03.672657Z INFO ExtHandler Dec 16 03:22:03.672731 waagent[2715]: 2025-12-16T03:22:03.672713Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 9b4c255d-afd6-4965-8c46-ab63f603f9e1 eTag: 13620226858450012992 source: Fabric] Dec 16 03:22:03.672930 waagent[2715]: 2025-12-16T03:22:03.672906Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 16 03:22:03.673303 waagent[2715]: 2025-12-16T03:22:03.673274Z INFO ExtHandler Dec 16 03:22:03.673342 waagent[2715]: 2025-12-16T03:22:03.673319Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 03:22:03.676759 waagent[2715]: 2025-12-16T03:22:03.676733Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 03:22:03.739722 waagent[2715]: 2025-12-16T03:22:03.739670Z INFO ExtHandler Downloaded certificate {'thumbprint': '91A8071599611137E17CA709D7EEA88D0A17CCC4', 'hasPrivateKey': True} Dec 16 03:22:03.740065 waagent[2715]: 2025-12-16T03:22:03.740034Z INFO ExtHandler Fetch goal state completed Dec 16 03:22:03.751739 waagent[2715]: 2025-12-16T03:22:03.751662Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Dec 16 03:22:03.755721 waagent[2715]: 2025-12-16T03:22:03.755667Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2715 Dec 16 03:22:03.755846 waagent[2715]: 2025-12-16T03:22:03.755802Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 03:22:03.756100 waagent[2715]: 2025-12-16T03:22:03.756075Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 03:22:03.757176 waagent[2715]: 2025-12-16T03:22:03.757101Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 03:22:03.757480 waagent[2715]: 2025-12-16T03:22:03.757448Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4547.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 03:22:03.757589 waagent[2715]: 2025-12-16T03:22:03.757566Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 03:22:03.757965 waagent[2715]: 2025-12-16T03:22:03.757939Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 03:22:03.806020 waagent[2715]: 2025-12-16T03:22:03.805989Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 03:22:03.806194 waagent[2715]: 2025-12-16T03:22:03.806171Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 03:22:03.811862 waagent[2715]: 2025-12-16T03:22:03.811723Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 03:22:03.817129 systemd[1]: Reload requested from client PID 2730 ('systemctl') (unit waagent.service)... Dec 16 03:22:03.817159 systemd[1]: Reloading... Dec 16 03:22:03.907164 zram_generator::config[2772]: No configuration found. Dec 16 03:22:04.029163 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Dec 16 03:22:04.094797 systemd[1]: Reloading finished in 277 ms. Dec 16 03:22:04.112825 waagent[2715]: 2025-12-16T03:22:04.110526Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 03:22:04.112825 waagent[2715]: 2025-12-16T03:22:04.110681Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 03:22:04.532872 waagent[2715]: 2025-12-16T03:22:04.532753Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Dec 16 03:22:04.533112 waagent[2715]: 2025-12-16T03:22:04.533083Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 03:22:04.533804 waagent[2715]: 2025-12-16T03:22:04.533753Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 03:22:04.534133 waagent[2715]: 2025-12-16T03:22:04.534106Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 03:22:04.534328 waagent[2715]: 2025-12-16T03:22:04.534304Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 03:22:04.534386 waagent[2715]: 2025-12-16T03:22:04.534364Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 03:22:04.534578 waagent[2715]: 2025-12-16T03:22:04.534555Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 03:22:04.534685 waagent[2715]: 2025-12-16T03:22:04.534652Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 03:22:04.534727 waagent[2715]: 2025-12-16T03:22:04.534695Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 03:22:04.534935 waagent[2715]: 2025-12-16T03:22:04.534910Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 03:22:04.534986 waagent[2715]: 2025-12-16T03:22:04.534965Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 03:22:04.535093 waagent[2715]: 2025-12-16T03:22:04.535072Z INFO EnvHandler ExtHandler Configure routes Dec 16 03:22:04.535181 waagent[2715]: 2025-12-16T03:22:04.535118Z INFO EnvHandler ExtHandler Gateway:None Dec 16 03:22:04.535227 waagent[2715]: 2025-12-16T03:22:04.535209Z INFO EnvHandler ExtHandler Routes:None Dec 16 03:22:04.536203 waagent[2715]: 2025-12-16T03:22:04.535678Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 03:22:04.536203 waagent[2715]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 03:22:04.536203 waagent[2715]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 03:22:04.536203 waagent[2715]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 03:22:04.536203 waagent[2715]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 03:22:04.536203 waagent[2715]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 03:22:04.536203 waagent[2715]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 03:22:04.536512 waagent[2715]: 2025-12-16T03:22:04.536466Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 03:22:04.536542 waagent[2715]: 2025-12-16T03:22:04.536516Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
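The /proc/net/route dump above encodes addresses as little-endian hex. Decoding a few of the entries (an illustrative check, not anything the agent does here) shows the table matches the DHCP lease logged earlier: eth0 on 10.200.8.0/24 with gateway 10.200.8.1, plus /32 host routes to the WireServer (168.63.129.16) and the IMDS address (169.254.169.254).

import socket
import struct

def decode(hex_le: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian hex words.
    return socket.inet_ntoa(struct.pack("<L", int(hex_le, 16)))

# Values copied from the routing table above.
print(decode("0108C80A"))  # 10.200.8.1      -> default gateway
print(decode("0008C80A"))  # 10.200.8.0      -> local /24 subnet
print(decode("10813FA8"))  # 168.63.129.16   -> Azure WireServer host route
print(decode("FEA9FEA9"))  # 169.254.169.254 -> IMDS host route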
Dec 16 03:22:04.537096 waagent[2715]: 2025-12-16T03:22:04.536694Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 03:22:04.544667 waagent[2715]: 2025-12-16T03:22:04.544625Z INFO ExtHandler ExtHandler Dec 16 03:22:04.544729 waagent[2715]: 2025-12-16T03:22:04.544691Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4841ebbc-d27b-480d-aeb0-6d61d7b7b6e0 correlation 4571332b-76a6-472d-b9c6-83153b276c72 created: 2025-12-16T03:21:12.361420Z] Dec 16 03:22:04.545024 waagent[2715]: 2025-12-16T03:22:04.544995Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 03:22:04.545581 waagent[2715]: 2025-12-16T03:22:04.545552Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 03:22:04.573655 waagent[2715]: 2025-12-16T03:22:04.573610Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 03:22:04.573655 waagent[2715]: Try `iptables -h' or 'iptables --help' for more information.) Dec 16 03:22:04.573952 waagent[2715]: 2025-12-16T03:22:04.573926Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: ADB29375-D3D7-43DD-9DCE-E73B06660765;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 03:22:04.627155 waagent[2715]: 2025-12-16T03:22:04.627094Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 03:22:04.627155 waagent[2715]: Executing ['ip', '-a', '-o', 'link']: Dec 16 03:22:04.627155 waagent[2715]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 03:22:04.627155 waagent[2715]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:74:1e:2e brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx7ced8d741e2e Dec 16 03:22:04.627155 waagent[2715]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:74:1e:2e brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Dec 16 03:22:04.627155 waagent[2715]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 03:22:04.627155 waagent[2715]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 03:22:04.627155 waagent[2715]: 2: eth0 inet 10.200.8.23/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 03:22:04.627155 waagent[2715]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 03:22:04.627155 waagent[2715]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 03:22:04.627155 waagent[2715]: 2: eth0 inet6 fe80::7eed:8dff:fe74:1e2e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 03:22:04.669594 waagent[2715]: 2025-12-16T03:22:04.669543Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 03:22:04.669594 waagent[2715]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 03:22:04.669594 waagent[2715]: pkts bytes target prot opt in out source destination Dec 16 03:22:04.669594 waagent[2715]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 03:22:04.669594 waagent[2715]: pkts bytes target prot opt in out source destination Dec 16 
03:22:04.669594 waagent[2715]: Chain OUTPUT (policy ACCEPT 2 packets, 294 bytes) Dec 16 03:22:04.669594 waagent[2715]: pkts bytes target prot opt in out source destination Dec 16 03:22:04.669594 waagent[2715]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 03:22:04.669594 waagent[2715]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 03:22:04.669594 waagent[2715]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 03:22:04.672545 waagent[2715]: 2025-12-16T03:22:04.672499Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 03:22:04.672545 waagent[2715]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 03:22:04.672545 waagent[2715]: pkts bytes target prot opt in out source destination Dec 16 03:22:04.672545 waagent[2715]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 03:22:04.672545 waagent[2715]: pkts bytes target prot opt in out source destination Dec 16 03:22:04.672545 waagent[2715]: Chain OUTPUT (policy ACCEPT 2 packets, 294 bytes) Dec 16 03:22:04.672545 waagent[2715]: pkts bytes target prot opt in out source destination Dec 16 03:22:04.672545 waagent[2715]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 03:22:04.672545 waagent[2715]: 7 758 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 03:22:04.672545 waagent[2715]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 03:22:11.320091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:22:11.321641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:22:11.832286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:22:11.838341 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:22:11.872923 kubelet[2870]: E1216 03:22:11.872867 2870 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:22:11.876055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:22:11.876230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:22:11.876579 systemd[1]: kubelet.service: Consumed 136ms CPU time, 110.4M memory peak. Dec 16 03:22:22.070332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 03:22:22.071836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:22:22.531418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:22:22.537328 (kubelet)[2885]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:22:22.571274 kubelet[2885]: E1216 03:22:22.571236 2885 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:22:22.572990 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:22:22.573125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:22:22.573504 systemd[1]: kubelet.service: Consumed 128ms CPU time, 110.2M memory peak. Dec 16 03:22:23.172569 chronyd[2466]: Selected source PHC0 Dec 16 03:22:28.471561 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:22:28.472686 systemd[1]: Started sshd@0-10.200.8.23:22-10.200.16.10:60050.service - OpenSSH per-connection server daemon (10.200.16.10:60050). Dec 16 03:22:29.148264 sshd[2893]: Accepted publickey for core from 10.200.16.10 port 60050 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:22:29.149425 sshd-session[2893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:29.153197 systemd-logind[2488]: New session 4 of user core. Dec 16 03:22:29.160302 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:22:29.555808 systemd[1]: Started sshd@1-10.200.8.23:22-10.200.16.10:60064.service - OpenSSH per-connection server daemon (10.200.16.10:60064). Dec 16 03:22:30.092560 sshd[2900]: Accepted publickey for core from 10.200.16.10 port 60064 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:22:30.093772 sshd-session[2900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:30.098331 systemd-logind[2488]: New session 5 of user core. Dec 16 03:22:30.107340 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:22:30.394222 sshd[2904]: Connection closed by 10.200.16.10 port 60064 Dec 16 03:22:30.394769 sshd-session[2900]: pam_unix(sshd:session): session closed for user core Dec 16 03:22:30.398251 systemd[1]: sshd@1-10.200.8.23:22-10.200.16.10:60064.service: Deactivated successfully. Dec 16 03:22:30.399844 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:22:30.401268 systemd-logind[2488]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:22:30.402910 systemd-logind[2488]: Removed session 5. Dec 16 03:22:30.516032 systemd[1]: Started sshd@2-10.200.8.23:22-10.200.16.10:58102.service - OpenSSH per-connection server daemon (10.200.16.10:58102). Dec 16 03:22:31.057165 sshd[2910]: Accepted publickey for core from 10.200.16.10 port 58102 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:22:31.058390 sshd-session[2910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:31.063041 systemd-logind[2488]: New session 6 of user core. Dec 16 03:22:31.069314 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 03:22:31.356476 sshd[2914]: Connection closed by 10.200.16.10 port 58102 Dec 16 03:22:31.357054 sshd-session[2910]: pam_unix(sshd:session): session closed for user core Dec 16 03:22:31.360108 systemd[1]: sshd@2-10.200.8.23:22-10.200.16.10:58102.service: Deactivated successfully. 
Dec 16 03:22:31.361665 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 03:22:31.362937 systemd-logind[2488]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:22:31.364004 systemd-logind[2488]: Removed session 6. Dec 16 03:22:31.466803 systemd[1]: Started sshd@3-10.200.8.23:22-10.200.16.10:58116.service - OpenSSH per-connection server daemon (10.200.16.10:58116). Dec 16 03:22:32.006311 sshd[2920]: Accepted publickey for core from 10.200.16.10 port 58116 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:22:32.007483 sshd-session[2920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:32.012003 systemd-logind[2488]: New session 7 of user core. Dec 16 03:22:32.018299 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 03:22:32.308983 sshd[2924]: Connection closed by 10.200.16.10 port 58116 Dec 16 03:22:32.309504 sshd-session[2920]: pam_unix(sshd:session): session closed for user core Dec 16 03:22:32.312543 systemd[1]: sshd@3-10.200.8.23:22-10.200.16.10:58116.service: Deactivated successfully. Dec 16 03:22:32.314229 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:22:32.316350 systemd-logind[2488]: Session 7 logged out. Waiting for processes to exit. Dec 16 03:22:32.317055 systemd-logind[2488]: Removed session 7. Dec 16 03:22:32.430948 systemd[1]: Started sshd@4-10.200.8.23:22-10.200.16.10:58124.service - OpenSSH per-connection server daemon (10.200.16.10:58124). Dec 16 03:22:32.820061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 03:22:32.821738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:22:32.969810 sshd[2930]: Accepted publickey for core from 10.200.16.10 port 58124 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:22:32.970973 sshd-session[2930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:32.975441 systemd-logind[2488]: New session 8 of user core. Dec 16 03:22:32.980309 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 03:22:33.379302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:22:33.382751 (kubelet)[2944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:22:33.416578 kubelet[2944]: E1216 03:22:33.416544 2944 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:22:33.418264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:22:33.418431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:22:33.418767 systemd[1]: kubelet.service: Consumed 127ms CPU time, 110.2M memory peak. 
Dec 16 03:22:33.477031 sudo[2938]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 03:22:33.477329 sudo[2938]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:22:33.499916 sudo[2938]: pam_unix(sudo:session): session closed for user root Dec 16 03:22:33.599450 sshd[2937]: Connection closed by 10.200.16.10 port 58124 Dec 16 03:22:33.600064 sshd-session[2930]: pam_unix(sshd:session): session closed for user core Dec 16 03:22:33.603773 systemd[1]: sshd@4-10.200.8.23:22-10.200.16.10:58124.service: Deactivated successfully. Dec 16 03:22:33.605352 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:22:33.606066 systemd-logind[2488]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:22:33.607521 systemd-logind[2488]: Removed session 8. Dec 16 03:22:33.715841 systemd[1]: Started sshd@5-10.200.8.23:22-10.200.16.10:58140.service - OpenSSH per-connection server daemon (10.200.16.10:58140). Dec 16 03:22:34.249830 sshd[2957]: Accepted publickey for core from 10.200.16.10 port 58140 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:22:34.251054 sshd-session[2957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:34.255695 systemd-logind[2488]: New session 9 of user core. Dec 16 03:22:34.262334 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 03:22:34.454200 sudo[2963]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 03:22:34.454455 sudo[2963]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:22:34.458522 sudo[2963]: pam_unix(sudo:session): session closed for user root Dec 16 03:22:34.463359 sudo[2962]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 03:22:34.463589 sudo[2962]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:22:34.470033 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:22:34.501662 kernel: kauditd_printk_skb: 154 callbacks suppressed Dec 16 03:22:34.501732 kernel: audit: type=1305 audit(1765855354.498:256): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:22:34.498000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:22:34.501848 augenrules[2987]: No rules Dec 16 03:22:34.498000 audit[2987]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd34398730 a2=420 a3=0 items=0 ppid=2968 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:34.502358 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:22:34.503115 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 03:22:34.507566 kernel: audit: type=1300 audit(1765855354.498:256): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd34398730 a2=420 a3=0 items=0 ppid=2968 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:34.507623 kernel: audit: type=1327 audit(1765855354.498:256): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:22:34.498000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:22:34.507822 sudo[2962]: pam_unix(sudo:session): session closed for user root Dec 16 03:22:34.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.513037 kernel: audit: type=1130 audit(1765855354.503:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.513070 kernel: audit: type=1131 audit(1765855354.503:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.513090 kernel: audit: type=1106 audit(1765855354.506:259): pid=2962 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.506000 audit[2962]: USER_END pid=2962 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.506000 audit[2962]: CRED_DISP pid=2962 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.517915 kernel: audit: type=1104 audit(1765855354.506:260): pid=2962 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:22:34.607530 sshd[2961]: Connection closed by 10.200.16.10 port 58140 Dec 16 03:22:34.607967 sshd-session[2957]: pam_unix(sshd:session): session closed for user core Dec 16 03:22:34.607000 audit[2957]: USER_END pid=2957 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:34.615709 kernel: audit: type=1106 audit(1765855354.607:261): pid=2957 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:34.615773 kernel: audit: type=1104 audit(1765855354.608:262): pid=2957 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:34.608000 audit[2957]: CRED_DISP pid=2957 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:34.616024 systemd-logind[2488]: Session 9 logged out. Waiting for processes to exit. Dec 16 03:22:34.616519 systemd[1]: sshd@5-10.200.8.23:22-10.200.16.10:58140.service: Deactivated successfully. Dec 16 03:22:34.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.23:22-10.200.16.10:58140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.618694 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:22:34.621315 systemd-logind[2488]: Removed session 9. Dec 16 03:22:34.626242 kernel: audit: type=1131 audit(1765855354.615:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.23:22-10.200.16.10:58140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.23:22-10.200.16.10:58150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:34.728911 systemd[1]: Started sshd@6-10.200.8.23:22-10.200.16.10:58150.service - OpenSSH per-connection server daemon (10.200.16.10:58150). 
Dec 16 03:22:35.269000 audit[2996]: USER_ACCT pid=2996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:35.270575 sshd[2996]: Accepted publickey for core from 10.200.16.10 port 58150 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:22:35.270000 audit[2996]: CRED_ACQ pid=2996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:35.270000 audit[2996]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7a085910 a2=3 a3=0 items=0 ppid=1 pid=2996 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:35.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:22:35.271893 sshd-session[2996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:22:35.276196 systemd-logind[2488]: New session 10 of user core. Dec 16 03:22:35.285301 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:22:35.286000 audit[2996]: USER_START pid=2996 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:35.287000 audit[3000]: CRED_ACQ pid=3000 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:22:35.473000 audit[3001]: USER_ACCT pid=3001 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:22:35.474714 sudo[3001]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:22:35.474974 sudo[3001]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:22:35.473000 audit[3001]: CRED_REFR pid=3001 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:22:35.473000 audit[3001]: USER_START pid=3001 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:22:37.228471 systemd[1]: Starting docker.service - Docker Application Container Engine... 
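
Two time formats run side by side in these records: the journal's wall-clock prefix and, on the kernel-relayed audit lines, an audit(EPOCH.MILLIS:SERIAL) stamp, where SERIAL is the per-event sequence number. The two agree and the prefixes here line up with UTC; for instance audit(1765855354.498:256) further up decodes to 03:22:34.498, matching the prefix on the corresponding audit: PROCTITLE record. A small conversion sketch (the helper name is ours):

    from datetime import datetime, timezone

    # Convert an audit stamp such as "audit(1765855354.498:256)" to wall-clock time.
    def audit_stamp_to_utc(stamp: str) -> datetime:
        epoch = float(stamp.split("(", 1)[1].split(":", 1)[0])
        return datetime.fromtimestamp(epoch, tz=timezone.utc)

    print(audit_stamp_to_utc("audit(1765855354.498:256)"))
    # -> 2025-12-16 03:22:34.498000+00:00
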
Dec 16 03:22:37.242402 (dockerd)[3020]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:22:38.408606 dockerd[3020]: time="2025-12-16T03:22:38.408542083Z" level=info msg="Starting up" Dec 16 03:22:38.410861 dockerd[3020]: time="2025-12-16T03:22:38.410829734Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:22:38.420508 dockerd[3020]: time="2025-12-16T03:22:38.420464716Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:22:38.486483 dockerd[3020]: time="2025-12-16T03:22:38.486453712Z" level=info msg="Loading containers: start." Dec 16 03:22:38.516163 kernel: Initializing XFRM netlink socket Dec 16 03:22:38.566000 audit[3066]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.566000 audit[3066]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdc9ae3b00 a2=0 a3=0 items=0 ppid=3020 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.566000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:22:38.568000 audit[3068]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.568000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc34301890 a2=0 a3=0 items=0 ppid=3020 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:22:38.569000 audit[3070]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.569000 audit[3070]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca5cd4e10 a2=0 a3=0 items=0 ppid=3020 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.569000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:22:38.571000 audit[3072]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.571000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3bbf55f0 a2=0 a3=0 items=0 ppid=3020 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:22:38.573000 audit[3074]: NETFILTER_CFG table=filter:9 family=2 entries=1 
op=nft_register_chain pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.573000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc1a63a30 a2=0 a3=0 items=0 ppid=3020 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.573000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:22:38.575000 audit[3076]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.575000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe68f36740 a2=0 a3=0 items=0 ppid=3020 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:22:38.576000 audit[3078]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.576000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe18e75920 a2=0 a3=0 items=0 ppid=3020 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.576000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:22:38.578000 audit[3080]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.578000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff00ae8c60 a2=0 a3=0 items=0 ppid=3020 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.578000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:22:38.620000 audit[3083]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.620000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdc48854e0 a2=0 a3=0 items=0 ppid=3020 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.620000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:22:38.622000 audit[3085]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3085 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.622000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcaed7dd50 a2=0 a3=0 items=0 ppid=3020 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.622000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:22:38.626000 audit[3087]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.626000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd68ebf580 a2=0 a3=0 items=0 ppid=3020 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.626000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:22:38.628000 audit[3089]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.628000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffce2130d0 a2=0 a3=0 items=0 ppid=3020 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.628000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:22:38.630000 audit[3091]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.630000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcadd94be0 a2=0 a3=0 items=0 ppid=3020 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.630000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:22:38.689000 audit[3121]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.689000 audit[3121]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc0c1856a0 a2=0 a3=0 items=0 ppid=3020 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:22:38.691000 audit[3123]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.691000 audit[3123]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffefcd97ec0 a2=0 a3=0 items=0 ppid=3020 pid=3123 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:22:38.693000 audit[3125]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.693000 audit[3125]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff21a37f80 a2=0 a3=0 items=0 ppid=3020 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.693000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:22:38.694000 audit[3127]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.694000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4cb23920 a2=0 a3=0 items=0 ppid=3020 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:22:38.696000 audit[3129]: NETFILTER_CFG table=filter:22 family=10 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.696000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc6e0c9890 a2=0 a3=0 items=0 ppid=3020 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:22:38.698000 audit[3131]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.698000 audit[3131]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd51fbdbb0 a2=0 a3=0 items=0 ppid=3020 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:22:38.700000 audit[3133]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.700000 audit[3133]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd852d7ab0 a2=0 a3=0 items=0 ppid=3020 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.700000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:22:38.702000 audit[3135]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.702000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe6f890780 a2=0 a3=0 items=0 ppid=3020 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:22:38.704000 audit[3137]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.704000 audit[3137]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc99631690 a2=0 a3=0 items=0 ppid=3020 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.704000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:22:38.705000 audit[3139]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.705000 audit[3139]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc52df7e60 a2=0 a3=0 items=0 ppid=3020 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.705000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:22:38.707000 audit[3141]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.707000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff86f26930 a2=0 a3=0 items=0 ppid=3020 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.707000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:22:38.709000 audit[3143]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.709000 audit[3143]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcb40fe5d0 a2=0 a3=0 items=0 ppid=3020 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.709000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:22:38.711000 audit[3145]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.711000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc252d4ed0 a2=0 a3=0 items=0 ppid=3020 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:22:38.715000 audit[3150]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.715000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc87a21110 a2=0 a3=0 items=0 ppid=3020 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:22:38.717000 audit[3152]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.717000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff1f1916b0 a2=0 a3=0 items=0 ppid=3020 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:22:38.718000 audit[3154]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3154 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.718000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdc4ddbc10 a2=0 a3=0 items=0 ppid=3020 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:22:38.720000 audit[3156]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.720000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3306d860 a2=0 a3=0 items=0 ppid=3020 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:22:38.722000 audit[3158]: NETFILTER_CFG table=filter:35 family=10 entries=1 
op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.722000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdd2934770 a2=0 a3=0 items=0 ppid=3020 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.722000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:22:38.724000 audit[3160]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:38.724000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffff77a5e0 a2=0 a3=0 items=0 ppid=3020 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:22:38.773000 audit[3165]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.773000 audit[3165]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc917b8710 a2=0 a3=0 items=0 ppid=3020 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:22:38.775000 audit[3167]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.775000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe9b4b3e10 a2=0 a3=0 items=0 ppid=3020 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:22:38.782000 audit[3175]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.782000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffe921d3710 a2=0 a3=0 items=0 ppid=3020 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.782000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:22:38.786000 audit[3180]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.786000 
audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcf4e38980 a2=0 a3=0 items=0 ppid=3020 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:22:38.789000 audit[3182]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.789000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffdcba648e0 a2=0 a3=0 items=0 ppid=3020 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:22:38.791000 audit[3184]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.791000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffad5c3260 a2=0 a3=0 items=0 ppid=3020 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:22:38.793000 audit[3186]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.793000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffdc978e4e0 a2=0 a3=0 items=0 ppid=3020 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:22:38.795000 audit[3188]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:38.795000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcc07f7510 a2=0 a3=0 items=0 ppid=3020 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:38.795000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:22:38.796890 
systemd-networkd[2145]: docker0: Link UP Dec 16 03:22:38.808932 dockerd[3020]: time="2025-12-16T03:22:38.808896168Z" level=info msg="Loading containers: done." Dec 16 03:22:38.858712 dockerd[3020]: time="2025-12-16T03:22:38.858657298Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:22:38.858841 dockerd[3020]: time="2025-12-16T03:22:38.858754314Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:22:38.858841 dockerd[3020]: time="2025-12-16T03:22:38.858829964Z" level=info msg="Initializing buildkit" Dec 16 03:22:38.901107 dockerd[3020]: time="2025-12-16T03:22:38.901068060Z" level=info msg="Completed buildkit initialization" Dec 16 03:22:38.908481 dockerd[3020]: time="2025-12-16T03:22:38.908436585Z" level=info msg="Daemon has completed initialization" Dec 16 03:22:38.908713 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 03:22:38.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:38.909485 dockerd[3020]: time="2025-12-16T03:22:38.909421131Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:22:40.095406 containerd[2508]: time="2025-12-16T03:22:40.095363597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 03:22:40.846295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1745782350.mount: Deactivated successfully. Dec 16 03:22:40.867165 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Dec 16 03:22:41.845628 containerd[2508]: time="2025-12-16T03:22:41.845578260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:41.847708 containerd[2508]: time="2025-12-16T03:22:41.847663373Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28477208" Dec 16 03:22:41.850228 containerd[2508]: time="2025-12-16T03:22:41.850187710Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:41.856783 containerd[2508]: time="2025-12-16T03:22:41.856602421Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.76120397s" Dec 16 03:22:41.856783 containerd[2508]: time="2025-12-16T03:22:41.856643845Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Dec 16 03:22:41.856783 containerd[2508]: time="2025-12-16T03:22:41.856695616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:41.857521 containerd[2508]: time="2025-12-16T03:22:41.857497988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 03:22:43.292692 containerd[2508]: time="2025-12-16T03:22:43.292633462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:43.294784 containerd[2508]: time="2025-12-16T03:22:43.294745063Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Dec 16 03:22:43.297097 containerd[2508]: time="2025-12-16T03:22:43.297056342Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:43.300782 containerd[2508]: time="2025-12-16T03:22:43.300467955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:43.301258 containerd[2508]: time="2025-12-16T03:22:43.301233086Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.443698258s" Dec 16 03:22:43.301303 containerd[2508]: time="2025-12-16T03:22:43.301268159Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" 
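
The containerd Pulled messages report both a size and the elapsed pull time, which makes transfer rates easy to sanity-check; for the kube-controller-manager pull above, 27,673,815 bytes in 1.443698258 s is roughly 18 MiB/s. A tiny worked example with the numbers copied from the log (whether the reported size counts compressed or unpacked content is not stated in these messages):

    # Throughput for the kube-controller-manager pull reported above.
    size_bytes = 27_673_815          # size "27673815" in the Pulled message
    elapsed_s = 1.443698258          # "in 1.443698258s"
    mib_per_s = size_bytes / elapsed_s / (1024 * 1024)
    print(f"{mib_per_s:.1f} MiB/s")  # ~18.3 MiB/s

The same arithmetic applies to the other image pulls in this transcript.
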
Dec 16 03:22:43.302146 containerd[2508]: time="2025-12-16T03:22:43.302108741Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 03:22:43.570062 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 03:22:43.571928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:22:43.993506 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 16 03:22:43.993604 kernel: audit: type=1130 audit(1765855363.992:314): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:43.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:43.992265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:22:44.003376 (kubelet)[3298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:22:44.035619 kubelet[3298]: E1216 03:22:44.035566 3298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:22:44.037207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:22:44.037338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:22:44.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:22:44.037664 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.8M memory peak. Dec 16 03:22:44.041160 kernel: audit: type=1131 audit(1765855364.037:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 03:22:44.857038 containerd[2508]: time="2025-12-16T03:22:44.856985894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:44.858971 containerd[2508]: time="2025-12-16T03:22:44.858934436Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20153199" Dec 16 03:22:44.861360 containerd[2508]: time="2025-12-16T03:22:44.861322128Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:44.865263 containerd[2508]: time="2025-12-16T03:22:44.865229481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:44.866173 containerd[2508]: time="2025-12-16T03:22:44.865933381Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.563797445s" Dec 16 03:22:44.866173 containerd[2508]: time="2025-12-16T03:22:44.865965595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Dec 16 03:22:44.866552 containerd[2508]: time="2025-12-16T03:22:44.866534544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 03:22:45.036004 update_engine[2490]: I20251216 03:22:45.035629 2490 update_attempter.cc:509] Updating boot flags... Dec 16 03:22:45.768408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622536877.mount: Deactivated successfully. 
Dec 16 03:22:46.138124 containerd[2508]: time="2025-12-16T03:22:46.138078413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:46.140150 containerd[2508]: time="2025-12-16T03:22:46.140102625Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=20340589" Dec 16 03:22:46.142434 containerd[2508]: time="2025-12-16T03:22:46.142393596Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:46.145315 containerd[2508]: time="2025-12-16T03:22:46.145273023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:46.145828 containerd[2508]: time="2025-12-16T03:22:46.145595629Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.279032328s" Dec 16 03:22:46.145828 containerd[2508]: time="2025-12-16T03:22:46.145627034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Dec 16 03:22:46.146068 containerd[2508]: time="2025-12-16T03:22:46.146051125Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 03:22:46.847348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191382976.mount: Deactivated successfully. 
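
The dockerd and containerd messages interleaved here are logfmt-style: space-separated key="value" pairs after the journal prefix, with embedded quotes backslash-escaped. A small parsing sketch, assuming the logfmt tail has already been isolated from the prefix; the function name is ours and the sample string is shortened from the kube-proxy PullImage message above:

    import shlex

    # Parse a logfmt-style tail such as the dockerd/containerd lines above.
    def parse_logfmt(tail: str) -> dict[str, str]:
        return dict(tok.split("=", 1) for tok in shlex.split(tail) if "=" in tok)

    sample = 'level=info msg="PullImage \\"registry.k8s.io/kube-proxy:v1.33.7\\""'
    fields = parse_logfmt(sample)
    print(fields["msg"])   # PullImage "registry.k8s.io/kube-proxy:v1.33.7"

shlex handles the escaped inner quotes, so msg comes back as a single field rather than being split on spaces.
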
Dec 16 03:22:47.642782 containerd[2508]: time="2025-12-16T03:22:47.642726601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:47.645086 containerd[2508]: time="2025-12-16T03:22:47.645050559Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Dec 16 03:22:47.647469 containerd[2508]: time="2025-12-16T03:22:47.647430987Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:47.650794 containerd[2508]: time="2025-12-16T03:22:47.650751725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:47.651591 containerd[2508]: time="2025-12-16T03:22:47.651473955Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.505393621s" Dec 16 03:22:47.651591 containerd[2508]: time="2025-12-16T03:22:47.651504573Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Dec 16 03:22:47.652117 containerd[2508]: time="2025-12-16T03:22:47.652094485Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:22:48.175018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3678496968.mount: Deactivated successfully. 
Dec 16 03:22:48.189389 containerd[2508]: time="2025-12-16T03:22:48.189340215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:22:48.195664 containerd[2508]: time="2025-12-16T03:22:48.195505611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 03:22:48.198321 containerd[2508]: time="2025-12-16T03:22:48.198296522Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:22:48.201446 containerd[2508]: time="2025-12-16T03:22:48.201418147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:22:48.201959 containerd[2508]: time="2025-12-16T03:22:48.201938254Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 549.810787ms" Dec 16 03:22:48.202024 containerd[2508]: time="2025-12-16T03:22:48.202013858Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:22:48.202521 containerd[2508]: time="2025-12-16T03:22:48.202497514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 03:22:48.815794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213254737.mount: Deactivated successfully. 
Dec 16 03:22:50.341038 containerd[2508]: time="2025-12-16T03:22:50.340988193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:50.342982 containerd[2508]: time="2025-12-16T03:22:50.342945783Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=46127678" Dec 16 03:22:50.345330 containerd[2508]: time="2025-12-16T03:22:50.345289860Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:50.348602 containerd[2508]: time="2025-12-16T03:22:50.348557073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:22:50.349478 containerd[2508]: time="2025-12-16T03:22:50.349450275Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.146928277s" Dec 16 03:22:50.349528 containerd[2508]: time="2025-12-16T03:22:50.349480343Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Dec 16 03:22:52.622471 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:22:52.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:52.622979 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.8M memory peak. Dec 16 03:22:52.627230 kernel: audit: type=1130 audit(1765855372.621:316): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:52.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:52.629995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:22:52.632154 kernel: audit: type=1131 audit(1765855372.621:317): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:52.658774 systemd[1]: Reload requested from client PID 3480 ('systemctl') (unit session-10.scope)... Dec 16 03:22:52.658787 systemd[1]: Reloading... Dec 16 03:22:52.743168 zram_generator::config[3530]: No configuration found. Dec 16 03:22:52.949972 systemd[1]: Reloading finished in 290 ms. Dec 16 03:22:53.062943 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:22:53.063033 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:22:53.063336 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:22:53.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:22:53.063460 systemd[1]: kubelet.service: Consumed 86ms CPU time, 83.4M memory peak. Dec 16 03:22:53.068160 kernel: audit: type=1130 audit(1765855373.062:318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:22:53.068253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:22:53.068000 audit: BPF prog-id=87 op=LOAD Dec 16 03:22:53.072366 kernel: audit: type=1334 audit(1765855373.068:319): prog-id=87 op=LOAD Dec 16 03:22:53.068000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:22:53.077162 kernel: audit: type=1334 audit(1765855373.068:320): prog-id=74 op=UNLOAD Dec 16 03:22:53.068000 audit: BPF prog-id=88 op=LOAD Dec 16 03:22:53.081236 kernel: audit: type=1334 audit(1765855373.068:321): prog-id=88 op=LOAD Dec 16 03:22:53.081458 kernel: audit: type=1334 audit(1765855373.068:322): prog-id=89 op=LOAD Dec 16 03:22:53.068000 audit: BPF prog-id=89 op=LOAD Dec 16 03:22:53.068000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:22:53.082802 kernel: audit: type=1334 audit(1765855373.068:323): prog-id=75 op=UNLOAD Dec 16 03:22:53.068000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:22:53.084009 kernel: audit: type=1334 audit(1765855373.068:324): prog-id=76 op=UNLOAD Dec 16 03:22:53.069000 audit: BPF prog-id=90 op=LOAD Dec 16 03:22:53.085204 kernel: audit: type=1334 audit(1765855373.069:325): prog-id=90 op=LOAD Dec 16 03:22:53.069000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:22:53.070000 audit: BPF prog-id=91 op=LOAD Dec 16 03:22:53.070000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:22:53.070000 audit: BPF prog-id=92 op=LOAD Dec 16 03:22:53.070000 audit: BPF prog-id=93 op=LOAD Dec 16 03:22:53.070000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:22:53.070000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:22:53.072000 audit: BPF prog-id=94 op=LOAD Dec 16 03:22:53.072000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:22:53.072000 audit: BPF prog-id=95 op=LOAD Dec 16 03:22:53.072000 audit: BPF prog-id=96 op=LOAD Dec 16 03:22:53.072000 audit: BPF prog-id=85 op=UNLOAD Dec 16 03:22:53.072000 audit: BPF prog-id=86 op=UNLOAD Dec 16 03:22:53.073000 audit: BPF prog-id=97 op=LOAD Dec 16 03:22:53.073000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:22:53.074000 audit: BPF prog-id=98 op=LOAD Dec 16 03:22:53.074000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:22:53.074000 audit: BPF prog-id=99 op=LOAD Dec 16 03:22:53.074000 audit: BPF prog-id=100 op=LOAD Dec 16 03:22:53.074000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:22:53.074000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:22:53.075000 audit: BPF prog-id=101 op=LOAD Dec 16 03:22:53.075000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:22:53.075000 audit: BPF prog-id=102 op=LOAD Dec 16 03:22:53.075000 audit: BPF prog-id=103 op=LOAD Dec 16 03:22:53.075000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:22:53.075000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:22:53.077000 audit: BPF prog-id=104 op=LOAD Dec 16 03:22:53.077000 audit: BPF prog-id=81 op=UNLOAD Dec 16 03:22:53.077000 audit: BPF prog-id=105 op=LOAD Dec 16 03:22:53.077000 audit: BPF prog-id=106 op=LOAD Dec 16 03:22:53.077000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:22:53.077000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:22:53.583443 systemd[1]: 
Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:22:53.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:22:53.592370 (kubelet)[3597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:22:53.628208 kubelet[3597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:22:53.628208 kubelet[3597]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:22:53.628208 kubelet[3597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:22:53.628473 kubelet[3597]: I1216 03:22:53.628264 3597 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:22:53.932085 kubelet[3597]: I1216 03:22:53.931997 3597 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:22:53.932085 kubelet[3597]: I1216 03:22:53.932020 3597 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:22:53.932443 kubelet[3597]: I1216 03:22:53.932271 3597 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:22:53.959159 kubelet[3597]: I1216 03:22:53.958752 3597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:22:53.959326 kubelet[3597]: E1216 03:22:53.959311 3597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 03:22:53.967709 kubelet[3597]: I1216 03:22:53.967684 3597 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:22:53.972097 kubelet[3597]: I1216 03:22:53.972077 3597 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:22:53.972324 kubelet[3597]: I1216 03:22:53.972297 3597 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:22:53.972468 kubelet[3597]: I1216 03:22:53.972324 3597 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-dc3ed46bb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:22:53.972594 kubelet[3597]: I1216 03:22:53.972469 3597 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:22:53.972594 kubelet[3597]: I1216 03:22:53.972479 3597 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:22:53.972594 kubelet[3597]: I1216 03:22:53.972579 3597 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:22:53.975159 kubelet[3597]: I1216 03:22:53.974962 3597 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:22:53.975159 kubelet[3597]: I1216 03:22:53.974984 3597 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:22:53.975159 kubelet[3597]: I1216 03:22:53.975029 3597 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:22:53.976932 kubelet[3597]: I1216 03:22:53.976920 3597 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:22:53.982158 kubelet[3597]: I1216 03:22:53.981855 3597 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:22:53.982359 kubelet[3597]: I1216 03:22:53.982343 3597 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:22:53.982936 kubelet[3597]: W1216 03:22:53.982913 3597 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
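
The "Creating Container Manager object based on Node Config" line above embeds the whole node configuration as a JSON object after nodeConfig=; pretty-printing it makes details such as the hard eviction thresholds much easier to read. A sketch that pulls the object out of such a line by brace matching (the function name is ours; it assumes the payload is valid JSON with no braces inside string values, as appears to be the case here):

    import json

    # Extract the nodeConfig={...} payload from a kubelet log line.
    def extract_node_config(line: str) -> dict:
        start = line.index("nodeConfig=") + len("nodeConfig=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return json.loads(line[start:i + 1])
        raise ValueError("unbalanced braces in nodeConfig payload")

    # Usage: print(json.dumps(extract_node_config(log_line), indent=2))
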
Dec 16 03:22:53.986055 kubelet[3597]: I1216 03:22:53.985234 3597 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:22:53.986055 kubelet[3597]: I1216 03:22:53.985279 3597 server.go:1289] "Started kubelet" Dec 16 03:22:53.986055 kubelet[3597]: E1216 03:22:53.985440 3597 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547.0.0-a-dc3ed46bb5&limit=500&resourceVersion=0\": dial tcp 10.200.8.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 03:22:53.989630 kubelet[3597]: E1216 03:22:53.989610 3597 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.23:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 03:22:53.990436 kubelet[3597]: I1216 03:22:53.990130 3597 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:22:53.990436 kubelet[3597]: I1216 03:22:53.990436 3597 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:22:53.990553 kubelet[3597]: I1216 03:22:53.990533 3597 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:22:53.992160 kubelet[3597]: I1216 03:22:53.992120 3597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:22:53.994931 kubelet[3597]: E1216 03:22:53.993718 3597 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.23:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547.0.0-a-dc3ed46bb5.1881941b531afe89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-dc3ed46bb5,UID:ci-4547.0.0-a-dc3ed46bb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-dc3ed46bb5,},FirstTimestamp:2025-12-16 03:22:53.985250953 +0000 UTC m=+0.389137968,LastTimestamp:2025-12-16 03:22:53.985250953 +0000 UTC m=+0.389137968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-dc3ed46bb5,}" Dec 16 03:22:53.995598 kubelet[3597]: I1216 03:22:53.995505 3597 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:22:53.996581 kubelet[3597]: I1216 03:22:53.996565 3597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:22:53.997000 audit[3612]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:53.997000 audit[3612]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffee7bf7530 a2=0 a3=0 items=0 ppid=3597 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:53.997000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:22:53.999091 kubelet[3597]: E1216 03:22:53.999076 3597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" Dec 16 03:22:53.999192 kubelet[3597]: I1216 03:22:53.999185 3597 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:22:53.999389 kubelet[3597]: I1216 03:22:53.999380 3597 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:22:53.999479 kubelet[3597]: I1216 03:22:53.999473 3597 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:22:53.999828 kubelet[3597]: E1216 03:22:53.999810 3597 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 03:22:54.000361 kubelet[3597]: E1216 03:22:54.000338 3597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-dc3ed46bb5?timeout=10s\": dial tcp 10.200.8.23:6443: connect: connection refused" interval="200ms" Dec 16 03:22:54.000510 kubelet[3597]: E1216 03:22:54.000499 3597 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:22:53.999000 audit[3613]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:53.999000 audit[3613]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdec8811d0 a2=0 a3=0 items=0 ppid=3597 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:53.999000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:22:54.001392 kubelet[3597]: I1216 03:22:54.001373 3597 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:22:54.002343 kubelet[3597]: I1216 03:22:54.002330 3597 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:22:54.002602 kubelet[3597]: I1216 03:22:54.002527 3597 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:22:54.002000 audit[3615]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:54.002000 audit[3615]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe1b84e160 a2=0 a3=0 items=0 ppid=3597 pid=3615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:22:54.004000 audit[3617]: 
NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:54.004000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff4bd27cb0 a2=0 a3=0 items=0 ppid=3597 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:22:54.029431 kubelet[3597]: I1216 03:22:54.029410 3597 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:22:54.029510 kubelet[3597]: I1216 03:22:54.029503 3597 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:22:54.029554 kubelet[3597]: I1216 03:22:54.029549 3597 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:22:54.028000 audit[3623]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:54.028000 audit[3623]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc1da00cf0 a2=0 a3=0 items=0 ppid=3597 pid=3623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.028000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:22:54.030398 kubelet[3597]: I1216 03:22:54.030384 3597 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 03:22:54.030000 audit[3626]: NETFILTER_CFG table=mangle:50 family=2 entries=1 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:54.030000 audit[3626]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3c291790 a2=0 a3=0 items=0 ppid=3597 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.030000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:22:54.030000 audit[3625]: NETFILTER_CFG table=mangle:51 family=10 entries=2 op=nft_register_chain pid=3625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:54.030000 audit[3625]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff29e1c9c0 a2=0 a3=0 items=0 ppid=3597 pid=3625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:22:54.032311 kubelet[3597]: I1216 03:22:54.032276 3597 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:22:54.031000 audit[3627]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:54.031000 audit[3627]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa3fe8d30 a2=0 a3=0 items=0 ppid=3597 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:22:54.033522 kubelet[3597]: I1216 03:22:54.032477 3597 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:22:54.033522 kubelet[3597]: I1216 03:22:54.032496 3597 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:22:54.033522 kubelet[3597]: I1216 03:22:54.032508 3597 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:22:54.033522 kubelet[3597]: E1216 03:22:54.032545 3597 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:22:54.032000 audit[3629]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:54.032000 audit[3629]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed8f9f600 a2=0 a3=0 items=0 ppid=3597 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.034321 kubelet[3597]: E1216 03:22:54.034077 3597 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.23:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 03:22:54.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:22:54.034606 kubelet[3597]: I1216 03:22:54.034597 3597 policy_none.go:49] "None policy: Start" Dec 16 03:22:54.034663 kubelet[3597]: I1216 03:22:54.034656 3597 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:22:54.034701 kubelet[3597]: I1216 03:22:54.034696 3597 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:22:54.033000 audit[3630]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:22:54.033000 audit[3630]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdff421ec0 a2=0 a3=0 items=0 ppid=3597 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.033000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:22:54.034000 audit[3631]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain 
pid=3631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:54.034000 audit[3631]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe2a7c9c0 a2=0 a3=0 items=0 ppid=3597 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.034000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:22:54.035000 audit[3632]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:22:54.035000 audit[3632]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffb5c9080 a2=0 a3=0 items=0 ppid=3597 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:22:54.041309 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:22:54.053054 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:22:54.056200 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:22:54.074756 kubelet[3597]: E1216 03:22:54.074647 3597 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:22:54.074872 kubelet[3597]: I1216 03:22:54.074864 3597 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:22:54.075305 kubelet[3597]: I1216 03:22:54.074970 3597 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:22:54.075474 kubelet[3597]: I1216 03:22:54.075464 3597 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:22:54.076129 kubelet[3597]: E1216 03:22:54.075987 3597 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:22:54.076287 kubelet[3597]: E1216 03:22:54.076267 3597 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547.0.0-a-dc3ed46bb5\" not found" Dec 16 03:22:54.142500 systemd[1]: Created slice kubepods-burstable-podccf1ff68e1c470f3b0d73aaccb2cddb6.slice - libcontainer container kubepods-burstable-podccf1ff68e1c470f3b0d73aaccb2cddb6.slice. Dec 16 03:22:54.149875 kubelet[3597]: E1216 03:22:54.149859 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.153430 systemd[1]: Created slice kubepods-burstable-pod94b4c71a81f5345c25634add4f7491fd.slice - libcontainer container kubepods-burstable-pod94b4c71a81f5345c25634add4f7491fd.slice. 
Dec 16 03:22:54.161349 kubelet[3597]: E1216 03:22:54.161331 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.163316 systemd[1]: Created slice kubepods-burstable-podf08ea6db3ece4b36806738461eb12e77.slice - libcontainer container kubepods-burstable-podf08ea6db3ece4b36806738461eb12e77.slice. Dec 16 03:22:54.165010 kubelet[3597]: E1216 03:22:54.164984 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.176985 kubelet[3597]: I1216 03:22:54.176968 3597 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.177296 kubelet[3597]: E1216 03:22:54.177277 3597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.23:6443/api/v1/nodes\": dial tcp 10.200.8.23:6443: connect: connection refused" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.200751 kubelet[3597]: E1216 03:22:54.200687 3597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-dc3ed46bb5?timeout=10s\": dial tcp 10.200.8.23:6443: connect: connection refused" interval="400ms" Dec 16 03:22:54.301196 kubelet[3597]: I1216 03:22:54.301127 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccf1ff68e1c470f3b0d73aaccb2cddb6-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"ccf1ff68e1c470f3b0d73aaccb2cddb6\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301196 kubelet[3597]: I1216 03:22:54.301192 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301435 kubelet[3597]: I1216 03:22:54.301211 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301435 kubelet[3597]: I1216 03:22:54.301253 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301435 kubelet[3597]: I1216 03:22:54.301274 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccf1ff68e1c470f3b0d73aaccb2cddb6-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"ccf1ff68e1c470f3b0d73aaccb2cddb6\") " 
pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301435 kubelet[3597]: I1216 03:22:54.301292 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccf1ff68e1c470f3b0d73aaccb2cddb6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"ccf1ff68e1c470f3b0d73aaccb2cddb6\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301435 kubelet[3597]: I1216 03:22:54.301309 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301531 kubelet[3597]: I1216 03:22:54.301328 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.301531 kubelet[3597]: I1216 03:22:54.301346 3597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f08ea6db3ece4b36806738461eb12e77-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"f08ea6db3ece4b36806738461eb12e77\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.378647 kubelet[3597]: I1216 03:22:54.378618 3597 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.378917 kubelet[3597]: E1216 03:22:54.378899 3597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.23:6443/api/v1/nodes\": dial tcp 10.200.8.23:6443: connect: connection refused" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.451303 containerd[2508]: time="2025-12-16T03:22:54.451207571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-dc3ed46bb5,Uid:ccf1ff68e1c470f3b0d73aaccb2cddb6,Namespace:kube-system,Attempt:0,}" Dec 16 03:22:54.462838 containerd[2508]: time="2025-12-16T03:22:54.462802670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5,Uid:94b4c71a81f5345c25634add4f7491fd,Namespace:kube-system,Attempt:0,}" Dec 16 03:22:54.466637 containerd[2508]: time="2025-12-16T03:22:54.466610335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-dc3ed46bb5,Uid:f08ea6db3ece4b36806738461eb12e77,Namespace:kube-system,Attempt:0,}" Dec 16 03:22:54.565630 containerd[2508]: time="2025-12-16T03:22:54.565579504Z" level=info msg="connecting to shim 7199eef6bf2453442552a7bf82f6fb01b36f759ae1926fb2daf7fdc7964bc5e5" address="unix:///run/containerd/s/0e27ec6f94a8bb5d085510afb8ca01aeeb565082411235400575ca9678fcde8a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:22:54.596330 systemd[1]: Started cri-containerd-7199eef6bf2453442552a7bf82f6fb01b36f759ae1926fb2daf7fdc7964bc5e5.scope - libcontainer container 7199eef6bf2453442552a7bf82f6fb01b36f759ae1926fb2daf7fdc7964bc5e5. 
Dec 16 03:22:54.602823 kubelet[3597]: E1216 03:22:54.601981 3597 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547.0.0-a-dc3ed46bb5?timeout=10s\": dial tcp 10.200.8.23:6443: connect: connection refused" interval="800ms" Dec 16 03:22:54.608195 containerd[2508]: time="2025-12-16T03:22:54.608125075Z" level=info msg="connecting to shim 5bca9fcb29150089edce033dc29ae822c6c5414384f9e4e43f41b09fae6f6787" address="unix:///run/containerd/s/b080db3162e93e895f70ebda74d4a88ce41bbc5d0f03dabe09dbe1fba46d32e8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:22:54.612884 containerd[2508]: time="2025-12-16T03:22:54.612844519Z" level=info msg="connecting to shim 8c9663000155786a49b239a7a8a435c65be0710257efb973ed697f2e4676c07e" address="unix:///run/containerd/s/3fd1b793a08ad05e4907a7675e19289837905429aa7888b25cbd34b2b2faef7b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:22:54.631000 audit: BPF prog-id=107 op=LOAD Dec 16 03:22:54.633000 audit: BPF prog-id=108 op=LOAD Dec 16 03:22:54.633000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3641 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393965656636626632343533343432353532613762663832663666 Dec 16 03:22:54.633000 audit: BPF prog-id=108 op=UNLOAD Dec 16 03:22:54.633000 audit[3653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3641 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393965656636626632343533343432353532613762663832663666 Dec 16 03:22:54.633000 audit: BPF prog-id=109 op=LOAD Dec 16 03:22:54.633000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3641 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393965656636626632343533343432353532613762663832663666 Dec 16 03:22:54.633000 audit: BPF prog-id=110 op=LOAD Dec 16 03:22:54.633000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3641 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.633000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393965656636626632343533343432353532613762663832663666 Dec 16 03:22:54.633000 audit: BPF prog-id=110 op=UNLOAD Dec 16 03:22:54.633000 audit[3653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3641 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393965656636626632343533343432353532613762663832663666 Dec 16 03:22:54.633000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:22:54.633000 audit[3653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3641 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393965656636626632343533343432353532613762663832663666 Dec 16 03:22:54.633000 audit: BPF prog-id=111 op=LOAD Dec 16 03:22:54.633000 audit[3653]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3641 pid=3653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731393965656636626632343533343432353532613762663832663666 Dec 16 03:22:54.642489 systemd[1]: Started cri-containerd-5bca9fcb29150089edce033dc29ae822c6c5414384f9e4e43f41b09fae6f6787.scope - libcontainer container 5bca9fcb29150089edce033dc29ae822c6c5414384f9e4e43f41b09fae6f6787. Dec 16 03:22:54.655450 systemd[1]: Started cri-containerd-8c9663000155786a49b239a7a8a435c65be0710257efb973ed697f2e4676c07e.scope - libcontainer container 8c9663000155786a49b239a7a8a435c65be0710257efb973ed697f2e4676c07e. 
Dec 16 03:22:54.665000 audit: BPF prog-id=112 op=LOAD Dec 16 03:22:54.666000 audit: BPF prog-id=113 op=LOAD Dec 16 03:22:54.666000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3673 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636139666362323931353030383965646365303333646332396165 Dec 16 03:22:54.666000 audit: BPF prog-id=113 op=UNLOAD Dec 16 03:22:54.666000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3673 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636139666362323931353030383965646365303333646332396165 Dec 16 03:22:54.666000 audit: BPF prog-id=114 op=LOAD Dec 16 03:22:54.666000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3673 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636139666362323931353030383965646365303333646332396165 Dec 16 03:22:54.666000 audit: BPF prog-id=115 op=LOAD Dec 16 03:22:54.666000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3673 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636139666362323931353030383965646365303333646332396165 Dec 16 03:22:54.666000 audit: BPF prog-id=115 op=UNLOAD Dec 16 03:22:54.666000 audit[3712]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3673 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636139666362323931353030383965646365303333646332396165 Dec 16 03:22:54.666000 audit: BPF prog-id=114 op=UNLOAD Dec 16 03:22:54.666000 audit[3712]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3673 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.666000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636139666362323931353030383965646365303333646332396165 Dec 16 03:22:54.667000 audit: BPF prog-id=116 op=LOAD Dec 16 03:22:54.667000 audit[3712]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3673 pid=3712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.667000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3562636139666362323931353030383965646365303333646332396165 Dec 16 03:22:54.673000 audit: BPF prog-id=117 op=LOAD Dec 16 03:22:54.674000 audit: BPF prog-id=118 op=LOAD Dec 16 03:22:54.674000 audit[3711]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3694 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393636333030303135353738366134396232333961376138613433 Dec 16 03:22:54.675000 audit: BPF prog-id=118 op=UNLOAD Dec 16 03:22:54.675000 audit[3711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3694 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.675000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393636333030303135353738366134396232333961376138613433 Dec 16 03:22:54.676000 audit: BPF prog-id=119 op=LOAD Dec 16 03:22:54.676000 audit[3711]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3694 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393636333030303135353738366134396232333961376138613433 Dec 16 03:22:54.678000 audit: BPF prog-id=120 op=LOAD Dec 16 03:22:54.678000 audit[3711]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3694 pid=3711 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393636333030303135353738366134396232333961376138613433 Dec 16 03:22:54.678000 audit: BPF prog-id=120 op=UNLOAD Dec 16 03:22:54.678000 audit[3711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3694 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393636333030303135353738366134396232333961376138613433 Dec 16 03:22:54.678000 audit: BPF prog-id=119 op=UNLOAD Dec 16 03:22:54.678000 audit[3711]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3694 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393636333030303135353738366134396232333961376138613433 Dec 16 03:22:54.678000 audit: BPF prog-id=121 op=LOAD Dec 16 03:22:54.678000 audit[3711]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3694 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863393636333030303135353738366134396232333961376138613433 Dec 16 03:22:54.694966 containerd[2508]: time="2025-12-16T03:22:54.694925810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547.0.0-a-dc3ed46bb5,Uid:ccf1ff68e1c470f3b0d73aaccb2cddb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7199eef6bf2453442552a7bf82f6fb01b36f759ae1926fb2daf7fdc7964bc5e5\"" Dec 16 03:22:54.704344 containerd[2508]: time="2025-12-16T03:22:54.702804722Z" level=info msg="CreateContainer within sandbox \"7199eef6bf2453442552a7bf82f6fb01b36f759ae1926fb2daf7fdc7964bc5e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:22:54.715818 containerd[2508]: time="2025-12-16T03:22:54.715793371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5,Uid:94b4c71a81f5345c25634add4f7491fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bca9fcb29150089edce033dc29ae822c6c5414384f9e4e43f41b09fae6f6787\"" Dec 16 03:22:54.721814 containerd[2508]: time="2025-12-16T03:22:54.721760082Z" level=info 
msg="Container 1880070fcf9cf1a5a3997866b3be5a3e1bfbf44bf6518a979bcea7d2f0179b88: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:22:54.722444 containerd[2508]: time="2025-12-16T03:22:54.722417376Z" level=info msg="CreateContainer within sandbox \"5bca9fcb29150089edce033dc29ae822c6c5414384f9e4e43f41b09fae6f6787\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:22:54.741925 containerd[2508]: time="2025-12-16T03:22:54.741900219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547.0.0-a-dc3ed46bb5,Uid:f08ea6db3ece4b36806738461eb12e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c9663000155786a49b239a7a8a435c65be0710257efb973ed697f2e4676c07e\"" Dec 16 03:22:54.744161 containerd[2508]: time="2025-12-16T03:22:54.743425643Z" level=info msg="CreateContainer within sandbox \"7199eef6bf2453442552a7bf82f6fb01b36f759ae1926fb2daf7fdc7964bc5e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1880070fcf9cf1a5a3997866b3be5a3e1bfbf44bf6518a979bcea7d2f0179b88\"" Dec 16 03:22:54.744392 containerd[2508]: time="2025-12-16T03:22:54.744373182Z" level=info msg="StartContainer for \"1880070fcf9cf1a5a3997866b3be5a3e1bfbf44bf6518a979bcea7d2f0179b88\"" Dec 16 03:22:54.746218 containerd[2508]: time="2025-12-16T03:22:54.746191626Z" level=info msg="connecting to shim 1880070fcf9cf1a5a3997866b3be5a3e1bfbf44bf6518a979bcea7d2f0179b88" address="unix:///run/containerd/s/0e27ec6f94a8bb5d085510afb8ca01aeeb565082411235400575ca9678fcde8a" protocol=ttrpc version=3 Dec 16 03:22:54.751716 containerd[2508]: time="2025-12-16T03:22:54.751676207Z" level=info msg="CreateContainer within sandbox \"8c9663000155786a49b239a7a8a435c65be0710257efb973ed697f2e4676c07e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:22:54.752672 containerd[2508]: time="2025-12-16T03:22:54.752650354Z" level=info msg="Container 39a710a4903d20d74180af5244c2b65d87e4bb5cec64d470331e45256d813784: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:22:54.767285 systemd[1]: Started cri-containerd-1880070fcf9cf1a5a3997866b3be5a3e1bfbf44bf6518a979bcea7d2f0179b88.scope - libcontainer container 1880070fcf9cf1a5a3997866b3be5a3e1bfbf44bf6518a979bcea7d2f0179b88. 
Dec 16 03:22:54.771023 containerd[2508]: time="2025-12-16T03:22:54.770995327Z" level=info msg="Container 1771efcc73e26d9046c3d2e4154611ee4c6a2d06b12c6bb6afc5a79c7190fa89: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:22:54.777062 containerd[2508]: time="2025-12-16T03:22:54.777021829Z" level=info msg="CreateContainer within sandbox \"5bca9fcb29150089edce033dc29ae822c6c5414384f9e4e43f41b09fae6f6787\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39a710a4903d20d74180af5244c2b65d87e4bb5cec64d470331e45256d813784\"" Dec 16 03:22:54.776000 audit: BPF prog-id=122 op=LOAD Dec 16 03:22:54.777896 containerd[2508]: time="2025-12-16T03:22:54.777879127Z" level=info msg="StartContainer for \"39a710a4903d20d74180af5244c2b65d87e4bb5cec64d470331e45256d813784\"" Dec 16 03:22:54.777000 audit: BPF prog-id=123 op=LOAD Dec 16 03:22:54.777000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3641 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138383030373066636639636631613561333939373836366233626535 Dec 16 03:22:54.777000 audit: BPF prog-id=123 op=UNLOAD Dec 16 03:22:54.777000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3641 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138383030373066636639636631613561333939373836366233626535 Dec 16 03:22:54.777000 audit: BPF prog-id=124 op=LOAD Dec 16 03:22:54.777000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3641 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138383030373066636639636631613561333939373836366233626535 Dec 16 03:22:54.777000 audit: BPF prog-id=125 op=LOAD Dec 16 03:22:54.777000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3641 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138383030373066636639636631613561333939373836366233626535 Dec 16 03:22:54.777000 
audit: BPF prog-id=125 op=UNLOAD Dec 16 03:22:54.777000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3641 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138383030373066636639636631613561333939373836366233626535 Dec 16 03:22:54.777000 audit: BPF prog-id=124 op=UNLOAD Dec 16 03:22:54.777000 audit[3770]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3641 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138383030373066636639636631613561333939373836366233626535 Dec 16 03:22:54.777000 audit: BPF prog-id=126 op=LOAD Dec 16 03:22:54.777000 audit[3770]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3641 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138383030373066636639636631613561333939373836366233626535 Dec 16 03:22:54.780520 containerd[2508]: time="2025-12-16T03:22:54.780454000Z" level=info msg="connecting to shim 39a710a4903d20d74180af5244c2b65d87e4bb5cec64d470331e45256d813784" address="unix:///run/containerd/s/b080db3162e93e895f70ebda74d4a88ce41bbc5d0f03dabe09dbe1fba46d32e8" protocol=ttrpc version=3 Dec 16 03:22:54.781567 kubelet[3597]: I1216 03:22:54.781227 3597 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.781567 kubelet[3597]: E1216 03:22:54.781538 3597 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.23:6443/api/v1/nodes\": dial tcp 10.200.8.23:6443: connect: connection refused" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:54.785886 containerd[2508]: time="2025-12-16T03:22:54.785858824Z" level=info msg="CreateContainer within sandbox \"8c9663000155786a49b239a7a8a435c65be0710257efb973ed697f2e4676c07e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1771efcc73e26d9046c3d2e4154611ee4c6a2d06b12c6bb6afc5a79c7190fa89\"" Dec 16 03:22:54.786633 containerd[2508]: time="2025-12-16T03:22:54.786500799Z" level=info msg="StartContainer for \"1771efcc73e26d9046c3d2e4154611ee4c6a2d06b12c6bb6afc5a79c7190fa89\"" Dec 16 03:22:54.789939 containerd[2508]: time="2025-12-16T03:22:54.789913557Z" level=info msg="connecting to shim 1771efcc73e26d9046c3d2e4154611ee4c6a2d06b12c6bb6afc5a79c7190fa89" address="unix:///run/containerd/s/3fd1b793a08ad05e4907a7675e19289837905429aa7888b25cbd34b2b2faef7b" 
protocol=ttrpc version=3 Dec 16 03:22:54.816456 systemd[1]: Started cri-containerd-39a710a4903d20d74180af5244c2b65d87e4bb5cec64d470331e45256d813784.scope - libcontainer container 39a710a4903d20d74180af5244c2b65d87e4bb5cec64d470331e45256d813784. Dec 16 03:22:54.820760 systemd[1]: Started cri-containerd-1771efcc73e26d9046c3d2e4154611ee4c6a2d06b12c6bb6afc5a79c7190fa89.scope - libcontainer container 1771efcc73e26d9046c3d2e4154611ee4c6a2d06b12c6bb6afc5a79c7190fa89. Dec 16 03:22:54.828714 containerd[2508]: time="2025-12-16T03:22:54.828688599Z" level=info msg="StartContainer for \"1880070fcf9cf1a5a3997866b3be5a3e1bfbf44bf6518a979bcea7d2f0179b88\" returns successfully" Dec 16 03:22:54.836000 audit: BPF prog-id=127 op=LOAD Dec 16 03:22:54.837000 audit: BPF prog-id=128 op=LOAD Dec 16 03:22:54.837000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3673 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339613731306134393033643230643734313830616635323434633262 Dec 16 03:22:54.837000 audit: BPF prog-id=128 op=UNLOAD Dec 16 03:22:54.837000 audit[3793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3673 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339613731306134393033643230643734313830616635323434633262 Dec 16 03:22:54.837000 audit: BPF prog-id=129 op=LOAD Dec 16 03:22:54.837000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3673 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339613731306134393033643230643734313830616635323434633262 Dec 16 03:22:54.837000 audit: BPF prog-id=130 op=LOAD Dec 16 03:22:54.837000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3673 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339613731306134393033643230643734313830616635323434633262 Dec 16 03:22:54.837000 audit: BPF prog-id=130 op=UNLOAD Dec 16 03:22:54.837000 audit[3793]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3673 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339613731306134393033643230643734313830616635323434633262 Dec 16 03:22:54.837000 audit: BPF prog-id=129 op=UNLOAD Dec 16 03:22:54.837000 audit[3793]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3673 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339613731306134393033643230643734313830616635323434633262 Dec 16 03:22:54.837000 audit: BPF prog-id=131 op=LOAD Dec 16 03:22:54.837000 audit[3793]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3673 pid=3793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339613731306134393033643230643734313830616635323434633262 Dec 16 03:22:54.852000 audit: BPF prog-id=132 op=LOAD Dec 16 03:22:54.852000 audit: BPF prog-id=133 op=LOAD Dec 16 03:22:54.852000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3694 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137373165666363373365323664393034366333643265343135343631 Dec 16 03:22:54.852000 audit: BPF prog-id=133 op=UNLOAD Dec 16 03:22:54.852000 audit[3798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3694 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137373165666363373365323664393034366333643265343135343631 Dec 16 03:22:54.852000 audit: BPF prog-id=134 op=LOAD Dec 16 03:22:54.852000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3694 pid=3798 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137373165666363373365323664393034366333643265343135343631 Dec 16 03:22:54.852000 audit: BPF prog-id=135 op=LOAD Dec 16 03:22:54.852000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3694 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137373165666363373365323664393034366333643265343135343631 Dec 16 03:22:54.852000 audit: BPF prog-id=135 op=UNLOAD Dec 16 03:22:54.852000 audit[3798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3694 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137373165666363373365323664393034366333643265343135343631 Dec 16 03:22:54.852000 audit: BPF prog-id=134 op=UNLOAD Dec 16 03:22:54.852000 audit[3798]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3694 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137373165666363373365323664393034366333643265343135343631 Dec 16 03:22:54.853000 audit: BPF prog-id=136 op=LOAD Dec 16 03:22:54.853000 audit[3798]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3694 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:22:54.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137373165666363373365323664393034366333643265343135343631 Dec 16 03:22:54.901422 containerd[2508]: time="2025-12-16T03:22:54.900674072Z" level=info msg="StartContainer for \"39a710a4903d20d74180af5244c2b65d87e4bb5cec64d470331e45256d813784\" returns successfully" Dec 16 03:22:54.914192 containerd[2508]: time="2025-12-16T03:22:54.914168247Z" level=info msg="StartContainer for 
\"1771efcc73e26d9046c3d2e4154611ee4c6a2d06b12c6bb6afc5a79c7190fa89\" returns successfully" Dec 16 03:22:55.041617 kubelet[3597]: E1216 03:22:55.041533 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:55.047269 kubelet[3597]: E1216 03:22:55.047246 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:55.050151 kubelet[3597]: E1216 03:22:55.049261 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:55.584160 kubelet[3597]: I1216 03:22:55.583667 3597 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:56.051652 kubelet[3597]: E1216 03:22:56.051558 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:56.053290 kubelet[3597]: E1216 03:22:56.053261 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:56.688552 kubelet[3597]: E1216 03:22:56.688514 3597 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:56.990607 kubelet[3597]: I1216 03:22:56.990274 3597 apiserver.go:52] "Watching apiserver" Dec 16 03:22:56.999725 kubelet[3597]: I1216 03:22:56.999698 3597 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:22:57.054167 kubelet[3597]: E1216 03:22:57.052959 3597 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.059329 kubelet[3597]: I1216 03:22:57.059289 3597 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.061567 kubelet[3597]: E1216 03:22:57.061538 3597 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547.0.0-a-dc3ed46bb5\": node \"ci-4547.0.0-a-dc3ed46bb5\" not found" Dec 16 03:22:57.061843 kubelet[3597]: E1216 03:22:57.061774 3597 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4547.0.0-a-dc3ed46bb5.1881941b531afe89 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-dc3ed46bb5,UID:ci-4547.0.0-a-dc3ed46bb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-dc3ed46bb5,},FirstTimestamp:2025-12-16 03:22:53.985250953 +0000 UTC m=+0.389137968,LastTimestamp:2025-12-16 03:22:53.985250953 +0000 UTC m=+0.389137968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-dc3ed46bb5,}" Dec 16 03:22:57.101171 kubelet[3597]: I1216 03:22:57.101145 3597 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.478030 kubelet[3597]: E1216 03:22:57.477918 3597 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4547.0.0-a-dc3ed46bb5.1881941b540379ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-dc3ed46bb5,UID:ci-4547.0.0-a-dc3ed46bb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-dc3ed46bb5,},FirstTimestamp:2025-12-16 03:22:54.000486892 +0000 UTC m=+0.404373924,LastTimestamp:2025-12-16 03:22:54.000486892 +0000 UTC m=+0.404373924,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-dc3ed46bb5,}" Dec 16 03:22:57.482224 kubelet[3597]: E1216 03:22:57.482178 3597 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.482224 kubelet[3597]: I1216 03:22:57.482207 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.483493 kubelet[3597]: E1216 03:22:57.483457 3597 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.483493 kubelet[3597]: I1216 03:22:57.483477 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.484687 kubelet[3597]: E1216 03:22:57.484657 3597 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-dc3ed46bb5\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.818560 kubelet[3597]: E1216 03:22:57.818356 3597 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4547.0.0-a-dc3ed46bb5.1881941b55b2a9d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547.0.0-a-dc3ed46bb5,UID:ci-4547.0.0-a-dc3ed46bb5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4547.0.0-a-dc3ed46bb5 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4547.0.0-a-dc3ed46bb5,},FirstTimestamp:2025-12-16 03:22:54.028745175 +0000 UTC m=+0.432632190,LastTimestamp:2025-12-16 03:22:54.028745175 +0000 UTC m=+0.432632190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547.0.0-a-dc3ed46bb5,}" Dec 16 03:22:57.898193 kubelet[3597]: I1216 03:22:57.898162 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:22:57.899483 kubelet[3597]: E1216 03:22:57.899456 3597 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-dc3ed46bb5\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:03.055202 kubelet[3597]: I1216 03:23:03.055168 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:03.207656 kubelet[3597]: I1216 03:23:03.207618 3597 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:23:07.901990 kubelet[3597]: I1216 03:23:07.901467 3597 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:08.113183 kubelet[3597]: I1216 03:23:08.112690 3597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" podStartSLOduration=5.112650287 podStartE2EDuration="5.112650287s" podCreationTimestamp="2025-12-16 03:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:23:04.567813722 +0000 UTC m=+10.971700739" watchObservedRunningTime="2025-12-16 03:23:08.112650287 +0000 UTC m=+14.516537303" Dec 16 03:23:08.113183 kubelet[3597]: I1216 03:23:08.112911 3597 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:23:09.465177 systemd[1]: Reload requested from client PID 3878 ('systemctl') (unit session-10.scope)... Dec 16 03:23:09.465192 systemd[1]: Reloading... Dec 16 03:23:09.554481 zram_generator::config[3924]: No configuration found. Dec 16 03:23:09.769775 systemd[1]: Reloading finished in 304 ms. Dec 16 03:23:09.802920 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:23:09.822858 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:23:09.823125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:23:09.827971 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 03:23:09.828037 kernel: audit: type=1131 audit(1765855389.822:420): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:23:09.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:23:09.823212 systemd[1]: kubelet.service: Consumed 901ms CPU time, 132.3M memory peak. Dec 16 03:23:09.828624 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
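The audit records in this log hex-encode their proctitle= field because the recorded argument vector (here runc's) is NUL-separated. A minimal Python sketch, not part of the journal, for turning those hex strings back into readable command lines; the example value is a verbatim prefix of the proctitle recorded for runc at 03:22:54 above.

    # Decode a hex-encoded audit proctitle= value into its NUL-separated argv parts.
    def decode_proctitle(hex_string: str) -> list[str]:
        raw = bytes.fromhex(hex_string)
        return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

    # Prefix of the proctitle logged for runc at 03:22:54 (truncated here for brevity).
    example = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(example))
    # ['runc', '--root', '/run/containerd/runc/k8s.io']

The same helper applies to the iptables/ip6tables proctitle records further down in this log.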
Dec 16 03:23:09.828000 audit: BPF prog-id=137 op=LOAD Dec 16 03:23:09.832414 kernel: audit: type=1334 audit(1765855389.828:421): prog-id=137 op=LOAD Dec 16 03:23:09.832475 kernel: audit: type=1334 audit(1765855389.828:422): prog-id=98 op=UNLOAD Dec 16 03:23:09.828000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:23:09.833958 kernel: audit: type=1334 audit(1765855389.828:423): prog-id=138 op=LOAD Dec 16 03:23:09.828000 audit: BPF prog-id=138 op=LOAD Dec 16 03:23:09.835396 kernel: audit: type=1334 audit(1765855389.828:424): prog-id=139 op=LOAD Dec 16 03:23:09.828000 audit: BPF prog-id=139 op=LOAD Dec 16 03:23:09.836870 kernel: audit: type=1334 audit(1765855389.828:425): prog-id=99 op=UNLOAD Dec 16 03:23:09.828000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:23:09.838245 kernel: audit: type=1334 audit(1765855389.828:426): prog-id=100 op=UNLOAD Dec 16 03:23:09.828000 audit: BPF prog-id=100 op=UNLOAD Dec 16 03:23:09.839699 kernel: audit: type=1334 audit(1765855389.835:427): prog-id=140 op=LOAD Dec 16 03:23:09.835000 audit: BPF prog-id=140 op=LOAD Dec 16 03:23:09.841088 kernel: audit: type=1334 audit(1765855389.835:428): prog-id=141 op=LOAD Dec 16 03:23:09.835000 audit: BPF prog-id=141 op=LOAD Dec 16 03:23:09.835000 audit: BPF prog-id=102 op=UNLOAD Dec 16 03:23:09.835000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:23:09.836000 audit: BPF prog-id=142 op=LOAD Dec 16 03:23:09.836000 audit: BPF prog-id=90 op=UNLOAD Dec 16 03:23:09.838000 audit: BPF prog-id=143 op=LOAD Dec 16 03:23:09.838000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:23:09.838000 audit: BPF prog-id=144 op=LOAD Dec 16 03:23:09.838000 audit: BPF prog-id=145 op=LOAD Dec 16 03:23:09.838000 audit: BPF prog-id=105 op=UNLOAD Dec 16 03:23:09.838000 audit: BPF prog-id=106 op=UNLOAD Dec 16 03:23:09.841000 audit: BPF prog-id=146 op=LOAD Dec 16 03:23:09.843172 kernel: audit: type=1334 audit(1765855389.835:429): prog-id=102 op=UNLOAD Dec 16 03:23:09.843000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:23:09.843000 audit: BPF prog-id=147 op=LOAD Dec 16 03:23:09.843000 audit: BPF prog-id=148 op=LOAD Dec 16 03:23:09.843000 audit: BPF prog-id=95 op=UNLOAD Dec 16 03:23:09.843000 audit: BPF prog-id=96 op=UNLOAD Dec 16 03:23:09.844000 audit: BPF prog-id=149 op=LOAD Dec 16 03:23:09.844000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:23:09.844000 audit: BPF prog-id=150 op=LOAD Dec 16 03:23:09.844000 audit: BPF prog-id=151 op=LOAD Dec 16 03:23:09.844000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:23:09.844000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:23:09.844000 audit: BPF prog-id=152 op=LOAD Dec 16 03:23:09.844000 audit: BPF prog-id=101 op=UNLOAD Dec 16 03:23:09.845000 audit: BPF prog-id=153 op=LOAD Dec 16 03:23:09.845000 audit: BPF prog-id=97 op=UNLOAD Dec 16 03:23:09.848000 audit: BPF prog-id=154 op=LOAD Dec 16 03:23:09.848000 audit: BPF prog-id=91 op=UNLOAD Dec 16 03:23:09.848000 audit: BPF prog-id=155 op=LOAD Dec 16 03:23:09.848000 audit: BPF prog-id=156 op=LOAD Dec 16 03:23:09.848000 audit: BPF prog-id=92 op=UNLOAD Dec 16 03:23:09.848000 audit: BPF prog-id=93 op=UNLOAD Dec 16 03:23:10.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:23:10.253417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
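The burst of BPF prog-id LOAD/UNLOAD audit records above appears to be systemd re-attaching per-unit cgroup BPF programs while units are reloaded and kubelet is restarted. A small sketch, assuming this journal has been saved to a plain-text file named boot.log (hypothetical name), that tallies which program IDs were loaded but never unloaded in such a capture:

    import re

    # Collect 'BPF prog-id=N op=LOAD/UNLOAD' audit events and report IDs still loaded.
    def outstanding_bpf_progs(log_text: str) -> set[int]:
        loaded: set[int] = set()
        for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", log_text):
            if op == "LOAD":
                loaded.add(int(prog_id))
            else:
                loaded.discard(int(prog_id))
        return loaded

    with open("boot.log") as f:  # hypothetical capture of this journal
        print(sorted(outstanding_bpf_progs(f.read())))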
Dec 16 03:23:10.262747 (kubelet)[3995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:23:10.294737 kubelet[3995]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:23:10.295815 kubelet[3995]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:23:10.295815 kubelet[3995]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:23:10.295815 kubelet[3995]: I1216 03:23:10.295105 3995 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:23:10.300621 kubelet[3995]: I1216 03:23:10.300602 3995 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 03:23:10.300621 kubelet[3995]: I1216 03:23:10.300620 3995 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:23:10.300824 kubelet[3995]: I1216 03:23:10.300806 3995 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 03:23:10.302194 kubelet[3995]: I1216 03:23:10.301826 3995 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 03:23:10.304918 kubelet[3995]: I1216 03:23:10.304406 3995 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:23:10.307727 kubelet[3995]: I1216 03:23:10.307711 3995 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:23:10.310715 kubelet[3995]: I1216 03:23:10.310681 3995 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:23:10.310890 kubelet[3995]: I1216 03:23:10.310860 3995 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:23:10.311129 kubelet[3995]: I1216 03:23:10.310895 3995 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547.0.0-a-dc3ed46bb5","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:23:10.311253 kubelet[3995]: I1216 03:23:10.311153 3995 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:23:10.311253 kubelet[3995]: I1216 03:23:10.311164 3995 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 03:23:10.311253 kubelet[3995]: I1216 03:23:10.311204 3995 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:23:10.311388 kubelet[3995]: I1216 03:23:10.311372 3995 kubelet.go:480] "Attempting to sync node with API server" Dec 16 03:23:10.311414 kubelet[3995]: I1216 03:23:10.311388 3995 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:23:10.311414 kubelet[3995]: I1216 03:23:10.311411 3995 kubelet.go:386] "Adding apiserver pod source" Dec 16 03:23:10.311514 kubelet[3995]: I1216 03:23:10.311425 3995 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:23:10.314799 kubelet[3995]: I1216 03:23:10.314706 3995 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:23:10.315207 kubelet[3995]: I1216 03:23:10.315194 3995 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 03:23:10.317493 kubelet[3995]: I1216 03:23:10.317479 3995 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:23:10.317566 kubelet[3995]: I1216 03:23:10.317521 3995 server.go:1289] "Started kubelet" Dec 16 03:23:10.320167 kubelet[3995]: I1216 03:23:10.319900 3995 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:23:10.320786 kubelet[3995]: 
I1216 03:23:10.320772 3995 server.go:317] "Adding debug handlers to kubelet server" Dec 16 03:23:10.325186 kubelet[3995]: I1216 03:23:10.324660 3995 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:23:10.325186 kubelet[3995]: I1216 03:23:10.324849 3995 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:23:10.326453 kubelet[3995]: E1216 03:23:10.326433 3995 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:23:10.327883 kubelet[3995]: I1216 03:23:10.327290 3995 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:23:10.328081 kubelet[3995]: I1216 03:23:10.327967 3995 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:23:10.338065 kubelet[3995]: I1216 03:23:10.338040 3995 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:23:10.338251 kubelet[3995]: E1216 03:23:10.338234 3995 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547.0.0-a-dc3ed46bb5\" not found" Dec 16 03:23:10.340256 kubelet[3995]: I1216 03:23:10.340238 3995 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:23:10.340352 kubelet[3995]: I1216 03:23:10.340342 3995 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:23:10.342054 kubelet[3995]: I1216 03:23:10.342030 3995 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 03:23:10.345203 kubelet[3995]: I1216 03:23:10.344581 3995 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 03:23:10.345203 kubelet[3995]: I1216 03:23:10.344602 3995 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 03:23:10.345203 kubelet[3995]: I1216 03:23:10.344619 3995 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
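The container_manager_linux.go entry above embeds the kubelet's effective node configuration as a single JSON object (nodeConfig={...}), including the hard eviction thresholds. A small sketch, assuming log_line holds that one journal entry verbatim, that pulls the JSON out and lists the eviction signals:

    import json

    # log_line is assumed to hold the full "Creating Container Manager object based on
    # Node Config" journal entry shown above; its nodeConfig JSON runs to the end of the entry.
    def eviction_signals(log_line: str) -> list[tuple[str, str]]:
        start = log_line.index("nodeConfig=") + len("nodeConfig=")
        node_config = json.loads(log_line[start:])
        return [
            (t["Signal"], t["Value"].get("Quantity") or f'{t["Value"]["Percentage"]:.0%}')
            for t in node_config["HardEvictionThresholds"]
        ]

    # For the entry above this yields: memory.available 100Mi, nodefs.available 10%,
    # nodefs.inodesFree 5%, imagefs.available 15%, imagefs.inodesFree 5%.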
Dec 16 03:23:10.345203 kubelet[3995]: I1216 03:23:10.344626 3995 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 03:23:10.345203 kubelet[3995]: E1216 03:23:10.344660 3995 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:23:10.346833 kubelet[3995]: I1216 03:23:10.346320 3995 factory.go:223] Registration of the systemd container factory successfully Dec 16 03:23:10.346833 kubelet[3995]: I1216 03:23:10.346397 3995 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:23:10.348680 kubelet[3995]: I1216 03:23:10.348660 3995 factory.go:223] Registration of the containerd container factory successfully Dec 16 03:23:10.387348 kubelet[3995]: I1216 03:23:10.387330 3995 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:23:10.387425 kubelet[3995]: I1216 03:23:10.387419 3995 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:23:10.387471 kubelet[3995]: I1216 03:23:10.387465 3995 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:23:10.387586 kubelet[3995]: I1216 03:23:10.387580 3995 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:23:10.387618 kubelet[3995]: I1216 03:23:10.387607 3995 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:23:10.387649 kubelet[3995]: I1216 03:23:10.387646 3995 policy_none.go:49] "None policy: Start" Dec 16 03:23:10.387678 kubelet[3995]: I1216 03:23:10.387674 3995 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:23:10.387703 kubelet[3995]: I1216 03:23:10.387700 3995 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:23:10.387795 kubelet[3995]: I1216 03:23:10.387789 3995 state_mem.go:75] "Updated machine memory state" Dec 16 03:23:10.390884 kubelet[3995]: E1216 03:23:10.390871 3995 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 03:23:10.391335 kubelet[3995]: I1216 03:23:10.391316 3995 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:23:10.391448 kubelet[3995]: I1216 03:23:10.391420 3995 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:23:10.391701 kubelet[3995]: I1216 03:23:10.391693 3995 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:23:10.394633 kubelet[3995]: E1216 03:23:10.394619 3995 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:23:10.445557 kubelet[3995]: I1216 03:23:10.445518 3995 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.445809 kubelet[3995]: I1216 03:23:10.445753 3995 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.445809 kubelet[3995]: I1216 03:23:10.445527 3995 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.494345 kubelet[3995]: I1216 03:23:10.494325 3995 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.609363 kubelet[3995]: I1216 03:23:10.609338 3995 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:23:10.641539 kubelet[3995]: I1216 03:23:10.641467 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ccf1ff68e1c470f3b0d73aaccb2cddb6-ca-certs\") pod \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"ccf1ff68e1c470f3b0d73aaccb2cddb6\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641539 kubelet[3995]: I1216 03:23:10.641540 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ccf1ff68e1c470f3b0d73aaccb2cddb6-k8s-certs\") pod \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"ccf1ff68e1c470f3b0d73aaccb2cddb6\") " pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641651 kubelet[3995]: I1216 03:23:10.641558 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641651 kubelet[3995]: I1216 03:23:10.641577 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-k8s-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641651 kubelet[3995]: I1216 03:23:10.641598 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641651 kubelet[3995]: I1216 03:23:10.641615 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ccf1ff68e1c470f3b0d73aaccb2cddb6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"ccf1ff68e1c470f3b0d73aaccb2cddb6\") " 
pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641651 kubelet[3995]: I1216 03:23:10.641631 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-ca-certs\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641775 kubelet[3995]: I1216 03:23:10.641648 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94b4c71a81f5345c25634add4f7491fd-kubeconfig\") pod \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"94b4c71a81f5345c25634add4f7491fd\") " pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.641775 kubelet[3995]: I1216 03:23:10.641665 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f08ea6db3ece4b36806738461eb12e77-kubeconfig\") pod \"kube-scheduler-ci-4547.0.0-a-dc3ed46bb5\" (UID: \"f08ea6db3ece4b36806738461eb12e77\") " pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.661910 kubelet[3995]: I1216 03:23:10.661514 3995 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:23:10.661910 kubelet[3995]: E1216 03:23:10.661728 3995 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-dc3ed46bb5\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.663919 kubelet[3995]: I1216 03:23:10.663899 3995 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:23:10.664284 kubelet[3995]: E1216 03:23:10.664268 3995 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5\" already exists" pod="kube-system/kube-controller-manager-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.667873 kubelet[3995]: I1216 03:23:10.667837 3995 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:10.668123 kubelet[3995]: I1216 03:23:10.668060 3995 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:11.314468 kubelet[3995]: I1216 03:23:11.314434 3995 apiserver.go:52] "Watching apiserver" Dec 16 03:23:11.340928 kubelet[3995]: I1216 03:23:11.340902 3995 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:23:11.377043 kubelet[3995]: I1216 03:23:11.377016 3995 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:11.378027 kubelet[3995]: I1216 03:23:11.378004 3995 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:11.701453 kubelet[3995]: I1216 03:23:11.701379 3995 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:23:11.701453 kubelet[3995]: E1216 03:23:11.701425 3995 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547.0.0-a-dc3ed46bb5\" already exists" pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:11.701942 kubelet[3995]: I1216 03:23:11.701893 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547.0.0-a-dc3ed46bb5" podStartSLOduration=1.7018792280000001 podStartE2EDuration="1.701879228s" podCreationTimestamp="2025-12-16 03:23:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:23:11.701486313 +0000 UTC m=+1.433828757" watchObservedRunningTime="2025-12-16 03:23:11.701879228 +0000 UTC m=+1.434221676" Dec 16 03:23:11.956694 kubelet[3995]: I1216 03:23:11.956602 3995 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 03:23:11.956694 kubelet[3995]: E1216 03:23:11.956661 3995 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547.0.0-a-dc3ed46bb5\" already exists" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:23:13.368165 kubelet[3995]: I1216 03:23:13.367937 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547.0.0-a-dc3ed46bb5" podStartSLOduration=6.367919631 podStartE2EDuration="6.367919631s" podCreationTimestamp="2025-12-16 03:23:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:23:12.561654818 +0000 UTC m=+2.293997269" watchObservedRunningTime="2025-12-16 03:23:13.367919631 +0000 UTC m=+3.100262082" Dec 16 03:23:14.878185 kubelet[3995]: I1216 03:23:14.878128 3995 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:23:14.878924 kubelet[3995]: I1216 03:23:14.878657 3995 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:23:14.878961 containerd[2508]: time="2025-12-16T03:23:14.878489346Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:23:18.666324 systemd[1]: Created slice kubepods-besteffort-pod06999734_a5e1_4ca2_aff8_1fc0f62c1409.slice - libcontainer container kubepods-besteffort-pod06999734_a5e1_4ca2_aff8_1fc0f62c1409.slice. 
Dec 16 03:23:18.692133 kubelet[3995]: I1216 03:23:18.692096 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/06999734-a5e1-4ca2-aff8-1fc0f62c1409-lib-modules\") pod \"kube-proxy-g99hn\" (UID: \"06999734-a5e1-4ca2-aff8-1fc0f62c1409\") " pod="kube-system/kube-proxy-g99hn" Dec 16 03:23:18.692133 kubelet[3995]: I1216 03:23:18.692127 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/06999734-a5e1-4ca2-aff8-1fc0f62c1409-xtables-lock\") pod \"kube-proxy-g99hn\" (UID: \"06999734-a5e1-4ca2-aff8-1fc0f62c1409\") " pod="kube-system/kube-proxy-g99hn" Dec 16 03:23:18.692458 kubelet[3995]: I1216 03:23:18.692157 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rwzk\" (UniqueName: \"kubernetes.io/projected/06999734-a5e1-4ca2-aff8-1fc0f62c1409-kube-api-access-6rwzk\") pod \"kube-proxy-g99hn\" (UID: \"06999734-a5e1-4ca2-aff8-1fc0f62c1409\") " pod="kube-system/kube-proxy-g99hn" Dec 16 03:23:18.692458 kubelet[3995]: I1216 03:23:18.692181 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/06999734-a5e1-4ca2-aff8-1fc0f62c1409-kube-proxy\") pod \"kube-proxy-g99hn\" (UID: \"06999734-a5e1-4ca2-aff8-1fc0f62c1409\") " pod="kube-system/kube-proxy-g99hn" Dec 16 03:23:18.977085 containerd[2508]: time="2025-12-16T03:23:18.976870580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g99hn,Uid:06999734-a5e1-4ca2-aff8-1fc0f62c1409,Namespace:kube-system,Attempt:0,}" Dec 16 03:23:19.226631 containerd[2508]: time="2025-12-16T03:23:19.226565316Z" level=info msg="connecting to shim cf9fabfb328f8913b0c4f46db43d37fc18558358ff73145cad545d2746eeb438" address="unix:///run/containerd/s/8938f2482d518e29aede7e8e79e18ede90a348f1eafc787c7237060250064e9d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:23:19.250328 systemd[1]: Started cri-containerd-cf9fabfb328f8913b0c4f46db43d37fc18558358ff73145cad545d2746eeb438.scope - libcontainer container cf9fabfb328f8913b0c4f46db43d37fc18558358ff73145cad545d2746eeb438. 
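The reconciler_common.go entries above enumerate the volumes attached to kube-proxy-g99hn (lib-modules, xtables-lock, the projected API-access token, and the kube-proxy ConfigMap). A small sketch, again assuming a plain-text capture of this journal in boot.log (hypothetical name) with one entry per line, that groups VerifyControllerAttachedVolume entries by pod:

    import re
    from collections import defaultdict

    VOLUME = re.compile(r'started for volume \\"([^"\\]+)\\"')  # klog escapes the inner quotes
    POD = re.compile(r'pod="([^"]+)"')

    def volumes_by_pod(log_text: str) -> dict[str, list[str]]:
        result: dict[str, list[str]] = defaultdict(list)
        for line in log_text.splitlines():
            vol, pod = VOLUME.search(line), POD.search(line)
            if vol and pod:
                result[pod.group(1)].append(vol.group(1))
        return dict(result)

    with open("boot.log") as f:  # hypothetical capture of this journal
        print(volumes_by_pod(f.read()).get("kube-system/kube-proxy-g99hn"))
        # expected from the entries above: lib-modules, xtables-lock,
        # kube-api-access-6rwzk, kube-proxy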
Dec 16 03:23:19.261885 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:23:19.261980 kernel: audit: type=1334 audit(1765855399.258:462): prog-id=157 op=LOAD Dec 16 03:23:19.258000 audit: BPF prog-id=157 op=LOAD Dec 16 03:23:19.268375 kernel: audit: type=1334 audit(1765855399.261:463): prog-id=158 op=LOAD Dec 16 03:23:19.268434 kernel: audit: type=1300 audit(1765855399.261:463): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.261000 audit: BPF prog-id=158 op=LOAD Dec 16 03:23:19.261000 audit[4061]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.273312 kernel: audit: type=1327 audit(1765855399.261:463): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.274801 kernel: audit: type=1334 audit(1765855399.261:464): prog-id=158 op=UNLOAD Dec 16 03:23:19.261000 audit: BPF prog-id=158 op=UNLOAD Dec 16 03:23:19.279309 kernel: audit: type=1300 audit(1765855399.261:464): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.261000 audit[4061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.283892 kernel: audit: type=1327 audit(1765855399.261:464): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.285281 kernel: audit: type=1334 audit(1765855399.261:465): prog-id=159 op=LOAD Dec 16 03:23:19.261000 audit: BPF prog-id=159 op=LOAD Dec 16 03:23:19.261000 audit[4061]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.292177 kernel: audit: type=1300 audit(1765855399.261:465): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.298160 kernel: audit: type=1327 audit(1765855399.261:465): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.261000 audit: BPF prog-id=160 op=LOAD Dec 16 03:23:19.261000 audit[4061]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.261000 audit: BPF prog-id=160 op=UNLOAD Dec 16 03:23:19.261000 audit[4061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.261000 audit: BPF prog-id=159 op=UNLOAD Dec 16 03:23:19.261000 audit[4061]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.261000 audit: BPF prog-id=161 op=LOAD Dec 16 03:23:19.261000 audit[4061]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4050 pid=4061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.261000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366396661626662333238663839313362306334663436646234336433 Dec 16 03:23:19.313239 containerd[2508]: time="2025-12-16T03:23:19.313167346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g99hn,Uid:06999734-a5e1-4ca2-aff8-1fc0f62c1409,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf9fabfb328f8913b0c4f46db43d37fc18558358ff73145cad545d2746eeb438\"" Dec 16 03:23:19.318419 systemd[1]: Created slice kubepods-besteffort-podc8f95e28_39d7_44e4_8175_f4f85f226755.slice - libcontainer container kubepods-besteffort-podc8f95e28_39d7_44e4_8175_f4f85f226755.slice. Dec 16 03:23:19.324409 containerd[2508]: time="2025-12-16T03:23:19.324384549Z" level=info msg="CreateContainer within sandbox \"cf9fabfb328f8913b0c4f46db43d37fc18558358ff73145cad545d2746eeb438\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:23:19.341243 containerd[2508]: time="2025-12-16T03:23:19.340744907Z" level=info msg="Container 6068d903f166c6c3d19af8237f89d75b2c228304d0fb955ccb3be85c94c8379e: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:23:19.352615 containerd[2508]: time="2025-12-16T03:23:19.352589278Z" level=info msg="CreateContainer within sandbox \"cf9fabfb328f8913b0c4f46db43d37fc18558358ff73145cad545d2746eeb438\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6068d903f166c6c3d19af8237f89d75b2c228304d0fb955ccb3be85c94c8379e\"" Dec 16 03:23:19.353015 containerd[2508]: time="2025-12-16T03:23:19.352991319Z" level=info msg="StartContainer for \"6068d903f166c6c3d19af8237f89d75b2c228304d0fb955ccb3be85c94c8379e\"" Dec 16 03:23:19.354034 containerd[2508]: time="2025-12-16T03:23:19.354010386Z" level=info msg="connecting to shim 6068d903f166c6c3d19af8237f89d75b2c228304d0fb955ccb3be85c94c8379e" address="unix:///run/containerd/s/8938f2482d518e29aede7e8e79e18ede90a348f1eafc787c7237060250064e9d" protocol=ttrpc version=3 Dec 16 03:23:19.368307 systemd[1]: Started cri-containerd-6068d903f166c6c3d19af8237f89d75b2c228304d0fb955ccb3be85c94c8379e.scope - libcontainer container 6068d903f166c6c3d19af8237f89d75b2c228304d0fb955ccb3be85c94c8379e. 
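The NETFILTER_CFG audit records further down (from 03:23:19.696 onward) appear to be kube-proxy, whose container was just started above, setting up its base iptables/ip6tables chains; their proctitle= fields decode with the same kind of helper sketched earlier. For example, the hex from the first such record (pid 4154) decodes to the ip6tables canary-chain command:

    # Same decoding idea as the earlier sketch, repeated here so the example is self-contained.
    def decode_proctitle(hex_string: str) -> list[str]:
        return [p.decode() for p in bytes.fromhex(hex_string).split(b"\x00") if p]

    # Hex copied verbatim from the NETFILTER_CFG record for pid 4154 below.
    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # ['ip6tables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']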
Dec 16 03:23:19.398048 kubelet[3995]: I1216 03:23:19.397973 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8f95e28-39d7-44e4-8175-f4f85f226755-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gwbtj\" (UID: \"c8f95e28-39d7-44e4-8175-f4f85f226755\") " pod="tigera-operator/tigera-operator-7dcd859c48-gwbtj" Dec 16 03:23:19.398048 kubelet[3995]: I1216 03:23:19.398007 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnqbl\" (UniqueName: \"kubernetes.io/projected/c8f95e28-39d7-44e4-8175-f4f85f226755-kube-api-access-bnqbl\") pod \"tigera-operator-7dcd859c48-gwbtj\" (UID: \"c8f95e28-39d7-44e4-8175-f4f85f226755\") " pod="tigera-operator/tigera-operator-7dcd859c48-gwbtj" Dec 16 03:23:19.414000 audit: BPF prog-id=162 op=LOAD Dec 16 03:23:19.414000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4050 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630363864393033663136366336633364313961663832333766383964 Dec 16 03:23:19.414000 audit: BPF prog-id=163 op=LOAD Dec 16 03:23:19.414000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4050 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630363864393033663136366336633364313961663832333766383964 Dec 16 03:23:19.414000 audit: BPF prog-id=163 op=UNLOAD Dec 16 03:23:19.414000 audit[4087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4050 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630363864393033663136366336633364313961663832333766383964 Dec 16 03:23:19.414000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:23:19.414000 audit[4087]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4050 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.414000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630363864393033663136366336633364313961663832333766383964 Dec 16 03:23:19.414000 audit: BPF prog-id=164 op=LOAD Dec 16 03:23:19.414000 audit[4087]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4050 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3630363864393033663136366336633364313961663832333766383964 Dec 16 03:23:19.435221 containerd[2508]: time="2025-12-16T03:23:19.435192941Z" level=info msg="StartContainer for \"6068d903f166c6c3d19af8237f89d75b2c228304d0fb955ccb3be85c94c8379e\" returns successfully" Dec 16 03:23:19.696000 audit[4154]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.696000 audit[4154]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4060d860 a2=0 a3=7ffc4060d84c items=0 ppid=4100 pid=4154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:23:19.697000 audit[4155]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=4155 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.697000 audit[4155]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2c7abf70 a2=0 a3=7fff2c7abf5c items=0 ppid=4100 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:23:19.701000 audit[4159]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=4159 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.701000 audit[4159]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd76c48aa0 a2=0 a3=7ffd76c48a8c items=0 ppid=4100 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:23:19.702000 audit[4160]: NETFILTER_CFG table=mangle:60 family=2 entries=1 op=nft_register_chain pid=4160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.702000 audit[4160]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe311812e0 a2=0 a3=7ffe311812cc items=0 ppid=4100 pid=4160 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:23:19.706000 audit[4161]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.706000 audit[4161]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbb24f6b0 a2=0 a3=7fffbb24f69c items=0 ppid=4100 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:23:19.707000 audit[4162]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=4162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.707000 audit[4162]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeda9b8120 a2=0 a3=7ffeda9b810c items=0 ppid=4100 pid=4162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:23:19.802000 audit[4163]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.802000 audit[4163]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe4747ce60 a2=0 a3=7ffe4747ce4c items=0 ppid=4100 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:23:19.804000 audit[4165]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.804000 audit[4165]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe95690710 a2=0 a3=7ffe956906fc items=0 ppid=4100 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 03:23:19.808000 audit[4168]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.808000 audit[4168]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc28546520 a2=0 a3=7ffc2854650c items=0 ppid=4100 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 03:23:19.809000 audit[4169]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.809000 audit[4169]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde120f930 a2=0 a3=7ffde120f91c items=0 ppid=4100 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:23:19.811000 audit[4171]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.811000 audit[4171]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd3878a1e0 a2=0 a3=7ffd3878a1cc items=0 ppid=4100 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:23:19.812000 audit[4172]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.812000 audit[4172]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5530f240 a2=0 a3=7ffe5530f22c items=0 ppid=4100 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:23:19.814000 audit[4174]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.814000 audit[4174]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffee250c1e0 a2=0 a3=7ffee250c1cc items=0 ppid=4100 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:23:19.818000 audit[4177]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule 
pid=4177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.818000 audit[4177]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe64dcddc0 a2=0 a3=7ffe64dcddac items=0 ppid=4100 pid=4177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 03:23:19.819000 audit[4178]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.819000 audit[4178]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefa11bf80 a2=0 a3=7ffefa11bf6c items=0 ppid=4100 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:23:19.821000 audit[4180]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.821000 audit[4180]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffaf578070 a2=0 a3=7fffaf57805c items=0 ppid=4100 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:23:19.822000 audit[4181]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.822000 audit[4181]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4b9184a0 a2=0 a3=7ffe4b91848c items=0 ppid=4100 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:23:19.825000 audit[4183]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=4183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.825000 audit[4183]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd96497ee0 a2=0 a3=7ffd96497ecc items=0 ppid=4100 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.825000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:23:19.828000 audit[4186]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.828000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd63df9480 a2=0 a3=7ffd63df946c items=0 ppid=4100 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:23:19.831000 audit[4189]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.831000 audit[4189]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa2614040 a2=0 a3=7fffa261402c items=0 ppid=4100 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:23:19.832000 audit[4190]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.832000 audit[4190]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcb6958d60 a2=0 a3=7ffcb6958d4c items=0 ppid=4100 pid=4190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:23:19.835000 audit[4192]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.835000 audit[4192]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc25f1f0a0 a2=0 a3=7ffc25f1f08c items=0 ppid=4100 pid=4192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:23:19.838000 audit[4195]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.838000 audit[4195]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1fa059c0 a2=0 a3=7fff1fa059ac items=0 ppid=4100 pid=4195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.838000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:23:19.839000 audit[4196]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.839000 audit[4196]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe458300a0 a2=0 a3=7ffe4583008c items=0 ppid=4100 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:23:19.841000 audit[4198]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:23:19.841000 audit[4198]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffee5068bb0 a2=0 a3=7ffee5068b9c items=0 ppid=4100 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.841000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:23:19.911000 audit[4204]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:19.911000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd479c68e0 a2=0 a3=7ffd479c68cc items=0 ppid=4100 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:19.924306 containerd[2508]: time="2025-12-16T03:23:19.924115441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gwbtj,Uid:c8f95e28-39d7-44e4-8175-f4f85f226755,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:23:19.939000 audit[4204]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:19.939000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd479c68e0 a2=0 a3=7ffd479c68cc items=0 ppid=4100 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.939000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:19.940000 audit[4209]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.940000 audit[4209]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc71317340 a2=0 a3=7ffc7131732c items=0 ppid=4100 pid=4209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:23:19.943000 audit[4211]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.943000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffc37ed4a0 a2=0 a3=7fffc37ed48c items=0 ppid=4100 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 03:23:19.946000 audit[4214]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.946000 audit[4214]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff21fe4100 a2=0 a3=7fff21fe40ec items=0 ppid=4100 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 03:23:19.950000 audit[4215]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.950000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1d3dfce0 a2=0 a3=7ffd1d3dfccc items=0 ppid=4100 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.950000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:23:19.957000 audit[4221]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=4221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.957000 audit[4221]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd37d91d40 a2=0 a3=7ffd37d91d2c items=0 ppid=4100 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:23:19.961382 containerd[2508]: time="2025-12-16T03:23:19.961168796Z" level=info msg="connecting to shim 4063cb19e66c74bb51f2980ece8e4a0cdf8feff2c8f8810b8ce8ff263cfb709a" address="unix:///run/containerd/s/40cabaa6dee260dfd3f0ff396a8450406b275dda1b4941f703bad3e03de7e14a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:23:19.960000 audit[4228]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.960000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9b01a0f0 a2=0 a3=7ffc9b01a0dc items=0 ppid=4100 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:23:19.965000 audit[4236]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.965000 audit[4236]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffff3a50ec0 a2=0 a3=7ffff3a50eac items=0 ppid=4100 pid=4236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 03:23:19.972000 audit[4243]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.972000 audit[4243]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe27c9e170 a2=0 a3=7ffe27c9e15c items=0 ppid=4100 pid=4243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:23:19.973000 audit[4244]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.973000 audit[4244]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd6ccaf50 a2=0 a3=7ffcd6ccaf3c items=0 ppid=4100 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.973000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:23:19.976000 audit[4251]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.976000 audit[4251]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe81f4ab00 a2=0 a3=7ffe81f4aaec items=0 ppid=4100 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.976000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:23:19.977000 audit[4252]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.977000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda0200910 a2=0 a3=7ffda02008fc items=0 ppid=4100 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:23:19.981000 audit[4257]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.981000 audit[4257]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8ebd4140 a2=0 a3=7ffe8ebd412c items=0 ppid=4100 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:23:19.987000 audit[4262]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.987000 audit[4262]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3449de50 a2=0 a3=7ffd3449de3c items=0 ppid=4100 pid=4262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:23:19.990357 systemd[1]: Started cri-containerd-4063cb19e66c74bb51f2980ece8e4a0cdf8feff2c8f8810b8ce8ff263cfb709a.scope - libcontainer container 
4063cb19e66c74bb51f2980ece8e4a0cdf8feff2c8f8810b8ce8ff263cfb709a. Dec 16 03:23:19.991000 audit[4266]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.991000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcab49d3f0 a2=0 a3=7ffcab49d3dc items=0 ppid=4100 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 03:23:19.994000 audit[4272]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.994000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe4d82e310 a2=0 a3=7ffe4d82e2fc items=0 ppid=4100 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:23:19.997000 audit[4275]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:19.997000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff846f0ac0 a2=0 a3=7fff846f0aac items=0 ppid=4100 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:19.997000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:23:20.002000 audit: BPF prog-id=165 op=LOAD Dec 16 03:23:20.002000 audit: BPF prog-id=166 op=LOAD Dec 16 03:23:20.002000 audit[4241]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4226 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430363363623139653636633734626235316632393830656365386534 Dec 16 03:23:20.003000 audit: BPF prog-id=166 op=UNLOAD Dec 16 03:23:20.003000 audit[4241]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.003000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430363363623139653636633734626235316632393830656365386534 Dec 16 03:23:20.003000 audit: BPF prog-id=167 op=LOAD Dec 16 03:23:20.003000 audit[4241]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4226 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430363363623139653636633734626235316632393830656365386534 Dec 16 03:23:20.003000 audit: BPF prog-id=168 op=LOAD Dec 16 03:23:20.003000 audit[4241]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4226 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430363363623139653636633734626235316632393830656365386534 Dec 16 03:23:20.003000 audit: BPF prog-id=168 op=UNLOAD Dec 16 03:23:20.003000 audit[4241]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430363363623139653636633734626235316632393830656365386534 Dec 16 03:23:20.003000 audit: BPF prog-id=167 op=UNLOAD Dec 16 03:23:20.003000 audit[4241]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430363363623139653636633734626235316632393830656365386534 Dec 16 03:23:20.003000 audit: BPF prog-id=169 op=LOAD Dec 16 03:23:20.003000 audit[4241]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4226 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.003000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430363363623139653636633734626235316632393830656365386534 Dec 16 03:23:20.004000 audit[4278]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:20.004000 audit[4278]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff239c3120 a2=0 a3=7fff239c310c items=0 ppid=4100 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:23:20.006000 audit[4279]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:20.006000 audit[4279]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec270fd60 a2=0 a3=7ffec270fd4c items=0 ppid=4100 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.006000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:23:20.008000 audit[4281]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:20.008000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc76a28f00 a2=0 a3=7ffc76a28eec items=0 ppid=4100 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:23:20.009000 audit[4282]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:20.009000 audit[4282]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd388d5670 a2=0 a3=7ffd388d565c items=0 ppid=4100 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:23:20.012000 audit[4284]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:20.012000 audit[4284]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe432b2370 a2=0 a3=7ffe432b235c items=0 ppid=4100 pid=4284 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.012000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:23:20.016000 audit[4287]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:23:20.016000 audit[4287]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd6427dfa0 a2=0 a3=7ffd6427df8c items=0 ppid=4100 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.016000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:23:20.023000 audit[4289]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:23:20.023000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcfc214c70 a2=0 a3=7ffcfc214c5c items=0 ppid=4100 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.023000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:20.024000 audit[4289]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:23:20.024000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcfc214c70 a2=0 a3=7ffcfc214c5c items=0 ppid=4100 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:20.024000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:20.042422 containerd[2508]: time="2025-12-16T03:23:20.042400077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gwbtj,Uid:c8f95e28-39d7-44e4-8175-f4f85f226755,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4063cb19e66c74bb51f2980ece8e4a0cdf8feff2c8f8810b8ce8ff263cfb709a\"" Dec 16 03:23:20.043919 containerd[2508]: time="2025-12-16T03:23:20.043891566Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:23:20.549757 kubelet[3995]: I1216 03:23:20.549694 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g99hn" podStartSLOduration=4.549648145 podStartE2EDuration="4.549648145s" podCreationTimestamp="2025-12-16 03:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:23:20.549550476 +0000 UTC m=+10.281892927" watchObservedRunningTime="2025-12-16 03:23:20.549648145 +0000 UTC m=+10.281990596" Dec 16 03:23:21.995538 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3549645111.mount: Deactivated successfully. Dec 16 03:23:22.534497 containerd[2508]: time="2025-12-16T03:23:22.534455587Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:22.536928 containerd[2508]: time="2025-12-16T03:23:22.536830162Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Dec 16 03:23:22.539197 containerd[2508]: time="2025-12-16T03:23:22.539172354Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:22.543100 containerd[2508]: time="2025-12-16T03:23:22.543056031Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:22.543838 containerd[2508]: time="2025-12-16T03:23:22.543492698Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.499488884s" Dec 16 03:23:22.543838 containerd[2508]: time="2025-12-16T03:23:22.543521505Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:23:22.549119 containerd[2508]: time="2025-12-16T03:23:22.549090875Z" level=info msg="CreateContainer within sandbox \"4063cb19e66c74bb51f2980ece8e4a0cdf8feff2c8f8810b8ce8ff263cfb709a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:23:22.565415 containerd[2508]: time="2025-12-16T03:23:22.564349603Z" level=info msg="Container e68f977e265e8d76e34f575a899e78d3795b0b4add524c38d659f601dd195ddf: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:23:22.568765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905757208.mount: Deactivated successfully. Dec 16 03:23:22.578911 containerd[2508]: time="2025-12-16T03:23:22.578884528Z" level=info msg="CreateContainer within sandbox \"4063cb19e66c74bb51f2980ece8e4a0cdf8feff2c8f8810b8ce8ff263cfb709a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e68f977e265e8d76e34f575a899e78d3795b0b4add524c38d659f601dd195ddf\"" Dec 16 03:23:22.579368 containerd[2508]: time="2025-12-16T03:23:22.579309063Z" level=info msg="StartContainer for \"e68f977e265e8d76e34f575a899e78d3795b0b4add524c38d659f601dd195ddf\"" Dec 16 03:23:22.581042 containerd[2508]: time="2025-12-16T03:23:22.581004311Z" level=info msg="connecting to shim e68f977e265e8d76e34f575a899e78d3795b0b4add524c38d659f601dd195ddf" address="unix:///run/containerd/s/40cabaa6dee260dfd3f0ff396a8450406b275dda1b4941f703bad3e03de7e14a" protocol=ttrpc version=3 Dec 16 03:23:22.604336 systemd[1]: Started cri-containerd-e68f977e265e8d76e34f575a899e78d3795b0b4add524c38d659f601dd195ddf.scope - libcontainer container e68f977e265e8d76e34f575a899e78d3795b0b4add524c38d659f601dd195ddf. 
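[editor's note] The PROCTITLE values in the audit records above are the processes' command lines, hex-encoded with NUL bytes separating the arguments. A minimal sketch (not part of the log; the helper name decode_proctitle is illustrative) that recovers the readable command from one of the NETFILTER_CFG records earlier in this log:

    def decode_proctitle(hex_value: str) -> str:
        # audit PROCTITLE is the raw argv of the process, hex-encoded,
        # with NUL bytes between the individual arguments
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00"))

    # proctitle copied from one of the iptables NETFILTER_CFG records above
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

Decoded this way, the long runs of records above are xtables-nft-multi creating the KUBE-* chains and jump rules (KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-POSTROUTING, KUBE-FIREWALL, ...) in the mangle, nat and filter tables for both IPv4 and IPv6, followed by iptables-restore -w 5 -W 100000 --noflush --counters batches, which is consistent with kube-proxy programming its rules.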
Dec 16 03:23:22.612000 audit: BPF prog-id=170 op=LOAD Dec 16 03:23:22.612000 audit: BPF prog-id=171 op=LOAD Dec 16 03:23:22.612000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228238 a2=98 a3=0 items=0 ppid=4226 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:22.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536386639373765323635653864373665333466353735613839396537 Dec 16 03:23:22.612000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:23:22.612000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:22.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536386639373765323635653864373665333466353735613839396537 Dec 16 03:23:22.612000 audit: BPF prog-id=172 op=LOAD Dec 16 03:23:22.612000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000228488 a2=98 a3=0 items=0 ppid=4226 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:22.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536386639373765323635653864373665333466353735613839396537 Dec 16 03:23:22.612000 audit: BPF prog-id=173 op=LOAD Dec 16 03:23:22.612000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000228218 a2=98 a3=0 items=0 ppid=4226 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:22.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536386639373765323635653864373665333466353735613839396537 Dec 16 03:23:22.612000 audit: BPF prog-id=173 op=UNLOAD Dec 16 03:23:22.612000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:22.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536386639373765323635653864373665333466353735613839396537 Dec 16 03:23:22.613000 audit: BPF prog-id=172 op=UNLOAD Dec 16 03:23:22.613000 audit[4306]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:22.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536386639373765323635653864373665333466353735613839396537 Dec 16 03:23:22.613000 audit: BPF prog-id=174 op=LOAD Dec 16 03:23:22.613000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002286e8 a2=98 a3=0 items=0 ppid=4226 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:22.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536386639373765323635653864373665333466353735613839396537 Dec 16 03:23:22.633045 containerd[2508]: time="2025-12-16T03:23:22.633022034Z" level=info msg="StartContainer for \"e68f977e265e8d76e34f575a899e78d3795b0b4add524c38d659f601dd195ddf\" returns successfully" Dec 16 03:23:35.136208 sudo[3001]: pam_unix(sudo:session): session closed for user root Dec 16 03:23:35.146042 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 16 03:23:35.146209 kernel: audit: type=1106 audit(1765855415.135:542): pid=3001 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:23:35.135000 audit[3001]: USER_END pid=3001 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:23:35.145000 audit[3001]: CRED_DISP pid=3001 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:23:35.154158 kernel: audit: type=1104 audit(1765855415.145:543): pid=3001 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:23:35.245766 sshd[3000]: Connection closed by 10.200.16.10 port 58150 Dec 16 03:23:35.246032 sshd-session[2996]: pam_unix(sshd:session): session closed for user core Dec 16 03:23:35.246000 audit[2996]: USER_END pid=2996 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:23:35.249960 systemd[1]: sshd@6-10.200.8.23:22-10.200.16.10:58150.service: Deactivated successfully. Dec 16 03:23:35.252598 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:23:35.252803 systemd[1]: session-10.scope: Consumed 3.598s CPU time, 229M memory peak. 
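[editor's note] The audit(<epoch>.<millis>:<serial>) token in each record carries the same wall-clock time as the syslog prefix, just as a Unix timestamp. A small sketch (not part of the log; the regex and helper name are illustrative) that extracts and converts it, using the type=1106 USER_END record above as the example:

    import re
    from datetime import datetime, timezone

    def audit_time(line: str):
        # pull the epoch timestamp and serial number out of the "audit(...)" token
        m = re.search(r"audit\((\d+\.\d+):(\d+)\)", line)
        if m is None:
            return None
        return datetime.fromtimestamp(float(m.group(1)), tz=timezone.utc), int(m.group(2))

    dt, serial = audit_time("kernel: audit: type=1106 audit(1765855415.135:542): pid=3001 ...")
    print(dt.isoformat(), serial)
    # -> 2025-12-16T03:23:35.135000+00:00 542, matching the Dec 16 03:23:35.135000 prefix
    #    on the corresponding USER_END (sudo session close) record above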
Dec 16 03:23:35.257273 kernel: audit: type=1106 audit(1765855415.246:544): pid=2996 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:23:35.257331 kernel: audit: type=1104 audit(1765855415.246:545): pid=2996 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:23:35.246000 audit[2996]: CRED_DISP pid=2996 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:23:35.256876 systemd-logind[2488]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:23:35.258740 systemd-logind[2488]: Removed session 10. Dec 16 03:23:35.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.23:22-10.200.16.10:58150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:23:35.266857 kernel: audit: type=1131 audit(1765855415.249:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.23:22-10.200.16.10:58150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:23:39.498000 audit[4389]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:39.505158 kernel: audit: type=1325 audit(1765855419.498:547): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:39.498000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff27d8ecb0 a2=0 a3=7fff27d8ec9c items=0 ppid=4100 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:39.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:39.520158 kernel: audit: type=1300 audit(1765855419.498:547): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff27d8ecb0 a2=0 a3=7fff27d8ec9c items=0 ppid=4100 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:39.520230 kernel: audit: type=1327 audit(1765855419.498:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:39.521000 audit[4389]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:39.529154 kernel: audit: type=1325 audit(1765855419.521:548): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4389 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
03:23:39.521000 audit[4389]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff27d8ecb0 a2=0 a3=0 items=0 ppid=4100 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:39.541156 kernel: audit: type=1300 audit(1765855419.521:548): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff27d8ecb0 a2=0 a3=0 items=0 ppid=4100 pid=4389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:39.521000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:39.546000 audit[4391]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:39.546000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcb62dbb70 a2=0 a3=7ffcb62dbb5c items=0 ppid=4100 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:39.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:39.549000 audit[4391]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:39.549000 audit[4391]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb62dbb70 a2=0 a3=0 items=0 ppid=4100 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:39.549000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:46.686089 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:23:46.686233 kernel: audit: type=1325 audit(1765855426.678:551): table=filter:112 family=2 entries=17 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:46.678000 audit[4393]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:46.678000 audit[4393]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd0417e6c0 a2=0 a3=7ffd0417e6ac items=0 ppid=4100 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:46.695124 kernel: audit: type=1300 audit(1765855426.678:551): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd0417e6c0 a2=0 a3=7ffd0417e6ac items=0 ppid=4100 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:46.695215 kernel: audit: type=1327 audit(1765855426.678:551): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:46.695235 kernel: audit: type=1325 audit(1765855426.692:552): table=nat:113 family=2 entries=12 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:46.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:46.692000 audit[4393]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:46.692000 audit[4393]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0417e6c0 a2=0 a3=0 items=0 ppid=4100 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:46.704698 kernel: audit: type=1300 audit(1765855426.692:552): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd0417e6c0 a2=0 a3=0 items=0 ppid=4100 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:46.692000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:46.708684 kernel: audit: type=1327 audit(1765855426.692:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:46.969000 audit[4395]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:46.969000 audit[4395]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc340b9fc0 a2=0 a3=7ffc340b9fac items=0 ppid=4100 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:46.979316 kernel: audit: type=1325 audit(1765855426.969:553): table=filter:114 family=2 entries=18 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:46.979361 kernel: audit: type=1300 audit(1765855426.969:553): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc340b9fc0 a2=0 a3=7ffc340b9fac items=0 ppid=4100 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:46.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:46.983635 kernel: audit: type=1327 audit(1765855426.969:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:46.979000 audit[4395]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:46.986887 kernel: audit: type=1325 audit(1765855426.979:554): table=nat:115 family=2 entries=12 op=nft_register_rule pid=4395 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 16 03:23:46.979000 audit[4395]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc340b9fc0 a2=0 a3=0 items=0 ppid=4100 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:46.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:47.992000 audit[4397]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:47.992000 audit[4397]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd5ec702b0 a2=0 a3=7ffd5ec7029c items=0 ppid=4100 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:47.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:47.999000 audit[4397]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4397 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:47.999000 audit[4397]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd5ec702b0 a2=0 a3=0 items=0 ppid=4100 pid=4397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:47.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:50.936000 audit[4402]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:50.936000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe8bd4a820 a2=0 a3=7ffe8bd4a80c items=0 ppid=4100 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:50.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:50.941000 audit[4402]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:50.941000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8bd4a820 a2=0 a3=0 items=0 ppid=4100 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:50.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:50.957000 audit[4404]: NETFILTER_CFG table=filter:120 family=2 entries=22 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:50.957000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcceaf0010 
a2=0 a3=7ffcceaefffc items=0 ppid=4100 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:50.957000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:50.961000 audit[4404]: NETFILTER_CFG table=nat:121 family=2 entries=12 op=nft_register_rule pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:50.961000 audit[4404]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcceaf0010 a2=0 a3=0 items=0 ppid=4100 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:50.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:51.156034 kubelet[3995]: I1216 03:23:51.155108 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gwbtj" podStartSLOduration=31.654583787 podStartE2EDuration="34.155091092s" podCreationTimestamp="2025-12-16 03:23:17 +0000 UTC" firstStartedPulling="2025-12-16 03:23:20.043601753 +0000 UTC m=+9.775944205" lastFinishedPulling="2025-12-16 03:23:22.544109059 +0000 UTC m=+12.276451510" observedRunningTime="2025-12-16 03:23:23.604486257 +0000 UTC m=+13.336828707" watchObservedRunningTime="2025-12-16 03:23:51.155091092 +0000 UTC m=+40.887433542" Dec 16 03:23:51.173400 systemd[1]: Created slice kubepods-besteffort-pod7fafc5f3_4100_42b4_8c8a_92d1409a19d6.slice - libcontainer container kubepods-besteffort-pod7fafc5f3_4100_42b4_8c8a_92d1409a19d6.slice. 
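[editor's note] The proctitle= fields in the audit records above are hex-encoded, NUL-separated argv strings. A minimal decoding sketch (an ad-hoc helper, not part of the captured journal; Python assumed):

    # Ad-hoc helper (assumption: not part of the captured journal) to decode the
    # hex-encoded, NUL-separated proctitle= fields in the audit records above.
    def decode_proctitle(hexstr: str) -> str:
        raw = bytes.fromhex(hexstr)
        # argv entries are separated by NUL bytes; drop any empty trailing field.
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    # The proctitle value repeated throughout the iptables-restore audit events:
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

This confirms the repeated NETFILTER_CFG events above come from kube-proxy style "iptables-restore -w 5 -W 100000 --noflush --counters" invocations via xtables-nft-multi.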
Dec 16 03:23:51.189501 kubelet[3995]: I1216 03:23:51.189367 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fafc5f3-4100-42b4-8c8a-92d1409a19d6-tigera-ca-bundle\") pod \"calico-typha-f4f4b55c-rzxr9\" (UID: \"7fafc5f3-4100-42b4-8c8a-92d1409a19d6\") " pod="calico-system/calico-typha-f4f4b55c-rzxr9" Dec 16 03:23:51.189501 kubelet[3995]: I1216 03:23:51.189415 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7fafc5f3-4100-42b4-8c8a-92d1409a19d6-typha-certs\") pod \"calico-typha-f4f4b55c-rzxr9\" (UID: \"7fafc5f3-4100-42b4-8c8a-92d1409a19d6\") " pod="calico-system/calico-typha-f4f4b55c-rzxr9" Dec 16 03:23:51.189501 kubelet[3995]: I1216 03:23:51.189435 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s95n\" (UniqueName: \"kubernetes.io/projected/7fafc5f3-4100-42b4-8c8a-92d1409a19d6-kube-api-access-4s95n\") pod \"calico-typha-f4f4b55c-rzxr9\" (UID: \"7fafc5f3-4100-42b4-8c8a-92d1409a19d6\") " pod="calico-system/calico-typha-f4f4b55c-rzxr9" Dec 16 03:23:51.482632 containerd[2508]: time="2025-12-16T03:23:51.482525543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4f4b55c-rzxr9,Uid:7fafc5f3-4100-42b4-8c8a-92d1409a19d6,Namespace:calico-system,Attempt:0,}" Dec 16 03:23:51.521453 containerd[2508]: time="2025-12-16T03:23:51.521414636Z" level=info msg="connecting to shim da7f1c691a2cb937bfe44ff7529de80fd6c26821f12128f920649086e094e27f" address="unix:///run/containerd/s/dd4c04c21b51146c412ce398c14fd9a2ca4395ca2403a161b2f762433462f427" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:23:51.543307 systemd[1]: Started cri-containerd-da7f1c691a2cb937bfe44ff7529de80fd6c26821f12128f920649086e094e27f.scope - libcontainer container da7f1c691a2cb937bfe44ff7529de80fd6c26821f12128f920649086e094e27f. 
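[editor's note] The reconciler_common entries above record each volume the kubelet attaches to a pod in a fixed key/value layout. A small parsing sketch (again an ad-hoc helper, not part of the journal) that tabulates volume-to-pod assignments from such lines:

    import re

    # Ad-hoc helper (assumption: not part of the captured journal) that tabulates
    # the "VerifyControllerAttachedVolume started" kubelet entries shown above.
    VOL_RE = re.compile(
        r'VerifyControllerAttachedVolume started for volume \\"(?P<volume>[^"\\]+)\\"'
        r'.*? pod="(?P<pod>[^"]+)"'
    )

    def volumes_by_pod(log_lines):
        table = {}
        for line in log_lines:
            m = VOL_RE.search(line)
            if m:
                table.setdefault(m.group("pod"), []).append(m.group("volume"))
        return table

    # One entry copied (abridged) from the journal above:
    sample = ('kubelet[3995]: I1216 03:23:51.189367 3995 reconciler_common.go:251] '
              '"operationExecutor.VerifyControllerAttachedVolume started for volume '
              '\\"tigera-ca-bundle\\" (...) " pod="calico-system/calico-typha-f4f4b55c-rzxr9"')
    print(volumes_by_pod([sample]))
    # {'calico-system/calico-typha-f4f4b55c-rzxr9': ['tigera-ca-bundle']}

Applied to the full journal, this groups the calico-typha, calico-node, and csi-node-driver volume attachments that appear below.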
Dec 16 03:23:51.550000 audit: BPF prog-id=175 op=LOAD Dec 16 03:23:51.550000 audit: BPF prog-id=176 op=LOAD Dec 16 03:23:51.550000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376631633639316132636239333762666534346666373532396465 Dec 16 03:23:51.550000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:23:51.550000 audit[4426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376631633639316132636239333762666534346666373532396465 Dec 16 03:23:51.551000 audit: BPF prog-id=177 op=LOAD Dec 16 03:23:51.551000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376631633639316132636239333762666534346666373532396465 Dec 16 03:23:51.551000 audit: BPF prog-id=178 op=LOAD Dec 16 03:23:51.551000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376631633639316132636239333762666534346666373532396465 Dec 16 03:23:51.551000 audit: BPF prog-id=178 op=UNLOAD Dec 16 03:23:51.551000 audit[4426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376631633639316132636239333762666534346666373532396465 Dec 16 03:23:51.551000 audit: BPF prog-id=177 op=UNLOAD Dec 16 03:23:51.551000 audit[4426]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376631633639316132636239333762666534346666373532396465 Dec 16 03:23:51.551000 audit: BPF prog-id=179 op=LOAD Dec 16 03:23:51.551000 audit[4426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4415 pid=4426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.551000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461376631633639316132636239333762666534346666373532396465 Dec 16 03:23:51.581950 containerd[2508]: time="2025-12-16T03:23:51.581907704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f4f4b55c-rzxr9,Uid:7fafc5f3-4100-42b4-8c8a-92d1409a19d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"da7f1c691a2cb937bfe44ff7529de80fd6c26821f12128f920649086e094e27f\"" Dec 16 03:23:51.583333 containerd[2508]: time="2025-12-16T03:23:51.583312652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:23:51.970000 audit[4451]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=4451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:51.975242 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 16 03:23:51.975326 kernel: audit: type=1325 audit(1765855431.970:569): table=filter:122 family=2 entries=22 op=nft_register_rule pid=4451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:51.980506 kernel: audit: type=1300 audit(1765855431.970:569): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff77f08180 a2=0 a3=7fff77f0816c items=0 ppid=4100 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.970000 audit[4451]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff77f08180 a2=0 a3=7fff77f0816c items=0 ppid=4100 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:51.982633 kernel: audit: type=1327 audit(1765855431.970:569): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:51.979000 audit[4451]: NETFILTER_CFG table=nat:123 family=2 entries=12 op=nft_register_rule pid=4451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:51.985498 kernel: audit: type=1325 audit(1765855431.979:570): table=nat:123 family=2 
entries=12 op=nft_register_rule pid=4451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:51.979000 audit[4451]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff77f08180 a2=0 a3=0 items=0 ppid=4100 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.989772 kernel: audit: type=1300 audit(1765855431.979:570): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff77f08180 a2=0 a3=0 items=0 ppid=4100 pid=4451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:51.979000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:51.993471 kernel: audit: type=1327 audit(1765855431.979:570): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:52.737784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569490211.mount: Deactivated successfully. Dec 16 03:23:52.911256 systemd[1]: Created slice kubepods-besteffort-pod610b4157_1b62_4b10_b5b8_50f3aa07f4a3.slice - libcontainer container kubepods-besteffort-pod610b4157_1b62_4b10_b5b8_50f3aa07f4a3.slice. Dec 16 03:23:53.001492 kubelet[3995]: I1216 03:23:53.001397 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-cni-bin-dir\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001492 kubelet[3995]: I1216 03:23:53.001430 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-var-lib-calico\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001492 kubelet[3995]: I1216 03:23:53.001447 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-node-certs\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001492 kubelet[3995]: I1216 03:23:53.001462 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-xtables-lock\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001492 kubelet[3995]: I1216 03:23:53.001482 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-cni-log-dir\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001907 kubelet[3995]: I1216 03:23:53.001497 3995 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-cni-net-dir\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001907 kubelet[3995]: I1216 03:23:53.001514 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2z7\" (UniqueName: \"kubernetes.io/projected/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-kube-api-access-vt2z7\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001907 kubelet[3995]: I1216 03:23:53.001532 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-lib-modules\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001907 kubelet[3995]: I1216 03:23:53.001557 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-tigera-ca-bundle\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.001907 kubelet[3995]: I1216 03:23:53.001576 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-var-run-calico\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.002026 kubelet[3995]: I1216 03:23:53.001596 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-flexvol-driver-host\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.002026 kubelet[3995]: I1216 03:23:53.001612 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/610b4157-1b62-4b10-b5b8-50f3aa07f4a3-policysync\") pod \"calico-node-xh77j\" (UID: \"610b4157-1b62-4b10-b5b8-50f3aa07f4a3\") " pod="calico-system/calico-node-xh77j" Dec 16 03:23:53.105164 kubelet[3995]: E1216 03:23:53.105110 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.105616 kubelet[3995]: W1216 03:23:53.105599 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.105867 kubelet[3995]: E1216 03:23:53.105851 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.106538 kubelet[3995]: E1216 03:23:53.106524 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.106651 kubelet[3995]: W1216 03:23:53.106640 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.106777 kubelet[3995]: E1216 03:23:53.106712 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.106910 kubelet[3995]: E1216 03:23:53.106904 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.106948 kubelet[3995]: W1216 03:23:53.106942 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.106997 kubelet[3995]: E1216 03:23:53.106990 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.107251 kubelet[3995]: E1216 03:23:53.107244 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.107306 kubelet[3995]: W1216 03:23:53.107299 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.107359 kubelet[3995]: E1216 03:23:53.107352 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.107576 kubelet[3995]: E1216 03:23:53.107525 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.107576 kubelet[3995]: W1216 03:23:53.107531 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.107576 kubelet[3995]: E1216 03:23:53.107539 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.107737 kubelet[3995]: E1216 03:23:53.107724 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.107785 kubelet[3995]: W1216 03:23:53.107778 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.107827 kubelet[3995]: E1216 03:23:53.107820 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.108020 kubelet[3995]: E1216 03:23:53.107958 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.108020 kubelet[3995]: W1216 03:23:53.107966 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.108020 kubelet[3995]: E1216 03:23:53.107973 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.108235 kubelet[3995]: E1216 03:23:53.108219 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.108275 kubelet[3995]: W1216 03:23:53.108269 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.108334 kubelet[3995]: E1216 03:23:53.108326 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.108540 kubelet[3995]: E1216 03:23:53.108534 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.108661 kubelet[3995]: W1216 03:23:53.108559 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.108661 kubelet[3995]: E1216 03:23:53.108567 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.113314 kubelet[3995]: E1216 03:23:53.113215 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.113832 kubelet[3995]: W1216 03:23:53.113809 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.113928 kubelet[3995]: E1216 03:23:53.113897 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.347130 containerd[2508]: time="2025-12-16T03:23:53.345510584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:53.347790 containerd[2508]: time="2025-12-16T03:23:53.347760912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 16 03:23:53.350246 containerd[2508]: time="2025-12-16T03:23:53.350174184Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:53.353536 containerd[2508]: time="2025-12-16T03:23:53.353482036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:53.354052 containerd[2508]: time="2025-12-16T03:23:53.353946953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.770480156s" Dec 16 03:23:53.354052 containerd[2508]: time="2025-12-16T03:23:53.353977440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:23:53.373379 kubelet[3995]: E1216 03:23:53.373266 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:23:53.376306 kubelet[3995]: E1216 03:23:53.376175 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.376643 kubelet[3995]: W1216 03:23:53.376424 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.376643 kubelet[3995]: E1216 03:23:53.376445 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.386613 containerd[2508]: time="2025-12-16T03:23:53.386586042Z" level=info msg="CreateContainer within sandbox \"da7f1c691a2cb937bfe44ff7529de80fd6c26821f12128f920649086e094e27f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:23:53.388292 kubelet[3995]: E1216 03:23:53.388244 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.388292 kubelet[3995]: W1216 03:23:53.388264 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.388292 kubelet[3995]: E1216 03:23:53.388280 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.388995 kubelet[3995]: E1216 03:23:53.388974 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.388995 kubelet[3995]: W1216 03:23:53.388994 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.389283 kubelet[3995]: E1216 03:23:53.389008 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.389283 kubelet[3995]: E1216 03:23:53.389208 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.389283 kubelet[3995]: W1216 03:23:53.389216 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.389283 kubelet[3995]: E1216 03:23:53.389226 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.390221 kubelet[3995]: E1216 03:23:53.390194 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.390221 kubelet[3995]: W1216 03:23:53.390244 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.390569 kubelet[3995]: E1216 03:23:53.390260 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.390692 kubelet[3995]: E1216 03:23:53.390680 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.390821 kubelet[3995]: W1216 03:23:53.390695 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.390821 kubelet[3995]: E1216 03:23:53.390707 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.390884 kubelet[3995]: E1216 03:23:53.390844 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.390884 kubelet[3995]: W1216 03:23:53.390850 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.390884 kubelet[3995]: E1216 03:23:53.390858 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.391798 kubelet[3995]: E1216 03:23:53.391780 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.391798 kubelet[3995]: W1216 03:23:53.391795 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.391798 kubelet[3995]: E1216 03:23:53.391808 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.392015 kubelet[3995]: E1216 03:23:53.391946 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.392015 kubelet[3995]: W1216 03:23:53.391952 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.392015 kubelet[3995]: E1216 03:23:53.391963 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.392175 kubelet[3995]: E1216 03:23:53.392086 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.392175 kubelet[3995]: W1216 03:23:53.392091 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.392175 kubelet[3995]: E1216 03:23:53.392098 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.392315 kubelet[3995]: E1216 03:23:53.392308 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.392369 kubelet[3995]: W1216 03:23:53.392315 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.392369 kubelet[3995]: E1216 03:23:53.392323 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.392501 kubelet[3995]: E1216 03:23:53.392424 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.392501 kubelet[3995]: W1216 03:23:53.392430 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.392501 kubelet[3995]: E1216 03:23:53.392436 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.392594 kubelet[3995]: E1216 03:23:53.392518 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.392594 kubelet[3995]: W1216 03:23:53.392523 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.392594 kubelet[3995]: E1216 03:23:53.392530 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.392791 kubelet[3995]: E1216 03:23:53.392780 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.392791 kubelet[3995]: W1216 03:23:53.392789 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.392852 kubelet[3995]: E1216 03:23:53.392795 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.393254 kubelet[3995]: E1216 03:23:53.393242 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.393254 kubelet[3995]: W1216 03:23:53.393251 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.393341 kubelet[3995]: E1216 03:23:53.393259 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.393469 kubelet[3995]: E1216 03:23:53.393441 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.393469 kubelet[3995]: W1216 03:23:53.393451 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.393469 kubelet[3995]: E1216 03:23:53.393458 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.394873 kubelet[3995]: E1216 03:23:53.394842 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.394873 kubelet[3995]: W1216 03:23:53.394856 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.394873 kubelet[3995]: E1216 03:23:53.394868 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.395193 kubelet[3995]: E1216 03:23:53.395179 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.395193 kubelet[3995]: W1216 03:23:53.395193 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.395268 kubelet[3995]: E1216 03:23:53.395204 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.395387 kubelet[3995]: E1216 03:23:53.395379 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.395387 kubelet[3995]: W1216 03:23:53.395387 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.395446 kubelet[3995]: E1216 03:23:53.395395 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.395518 kubelet[3995]: E1216 03:23:53.395508 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.395551 kubelet[3995]: W1216 03:23:53.395519 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.395551 kubelet[3995]: E1216 03:23:53.395526 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.395651 kubelet[3995]: E1216 03:23:53.395638 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.395651 kubelet[3995]: W1216 03:23:53.395644 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.395811 kubelet[3995]: E1216 03:23:53.395651 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.402173 containerd[2508]: time="2025-12-16T03:23:53.401599658Z" level=info msg="Container e597c5e89aa2196025fc816403ec967a5efe40da0b9651fac48a810bc0fff460: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:23:53.405126 kubelet[3995]: E1216 03:23:53.405100 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.405621 kubelet[3995]: W1216 03:23:53.405188 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.405621 kubelet[3995]: E1216 03:23:53.405209 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.405621 kubelet[3995]: I1216 03:23:53.405232 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/52f35797-5a94-4b5f-8ac7-147ca2758736-registration-dir\") pod \"csi-node-driver-srg9b\" (UID: \"52f35797-5a94-4b5f-8ac7-147ca2758736\") " pod="calico-system/csi-node-driver-srg9b" Dec 16 03:23:53.405621 kubelet[3995]: E1216 03:23:53.405364 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.405621 kubelet[3995]: W1216 03:23:53.405372 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.405621 kubelet[3995]: E1216 03:23:53.405380 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.405621 kubelet[3995]: I1216 03:23:53.405401 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52f35797-5a94-4b5f-8ac7-147ca2758736-kubelet-dir\") pod \"csi-node-driver-srg9b\" (UID: \"52f35797-5a94-4b5f-8ac7-147ca2758736\") " pod="calico-system/csi-node-driver-srg9b" Dec 16 03:23:53.405621 kubelet[3995]: E1216 03:23:53.405518 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.405856 kubelet[3995]: W1216 03:23:53.405525 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.405856 kubelet[3995]: E1216 03:23:53.405532 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.405856 kubelet[3995]: E1216 03:23:53.405661 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.405856 kubelet[3995]: W1216 03:23:53.405667 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.405856 kubelet[3995]: E1216 03:23:53.405675 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.405856 kubelet[3995]: E1216 03:23:53.405783 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.405856 kubelet[3995]: W1216 03:23:53.405788 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.405856 kubelet[3995]: E1216 03:23:53.405795 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.405856 kubelet[3995]: I1216 03:23:53.405819 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/52f35797-5a94-4b5f-8ac7-147ca2758736-socket-dir\") pod \"csi-node-driver-srg9b\" (UID: \"52f35797-5a94-4b5f-8ac7-147ca2758736\") " pod="calico-system/csi-node-driver-srg9b" Dec 16 03:23:53.406081 kubelet[3995]: E1216 03:23:53.405919 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406081 kubelet[3995]: W1216 03:23:53.405942 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406081 kubelet[3995]: E1216 03:23:53.405949 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.406081 kubelet[3995]: I1216 03:23:53.405970 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/52f35797-5a94-4b5f-8ac7-147ca2758736-varrun\") pod \"csi-node-driver-srg9b\" (UID: \"52f35797-5a94-4b5f-8ac7-147ca2758736\") " pod="calico-system/csi-node-driver-srg9b" Dec 16 03:23:53.406081 kubelet[3995]: E1216 03:23:53.406072 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406319 kubelet[3995]: W1216 03:23:53.406090 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406319 kubelet[3995]: E1216 03:23:53.406097 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.406319 kubelet[3995]: I1216 03:23:53.406116 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ks26\" (UniqueName: \"kubernetes.io/projected/52f35797-5a94-4b5f-8ac7-147ca2758736-kube-api-access-9ks26\") pod \"csi-node-driver-srg9b\" (UID: \"52f35797-5a94-4b5f-8ac7-147ca2758736\") " pod="calico-system/csi-node-driver-srg9b" Dec 16 03:23:53.406319 kubelet[3995]: E1216 03:23:53.406276 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406319 kubelet[3995]: W1216 03:23:53.406283 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406319 kubelet[3995]: E1216 03:23:53.406291 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.406516 kubelet[3995]: E1216 03:23:53.406389 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406516 kubelet[3995]: W1216 03:23:53.406394 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406516 kubelet[3995]: E1216 03:23:53.406400 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.406623 kubelet[3995]: E1216 03:23:53.406525 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406623 kubelet[3995]: W1216 03:23:53.406530 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406623 kubelet[3995]: E1216 03:23:53.406536 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.406623 kubelet[3995]: E1216 03:23:53.406624 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406851 kubelet[3995]: W1216 03:23:53.406629 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406851 kubelet[3995]: E1216 03:23:53.406634 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.406851 kubelet[3995]: E1216 03:23:53.406748 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406851 kubelet[3995]: W1216 03:23:53.406754 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406851 kubelet[3995]: E1216 03:23:53.406760 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.406996 kubelet[3995]: E1216 03:23:53.406870 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.406996 kubelet[3995]: W1216 03:23:53.406877 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.406996 kubelet[3995]: E1216 03:23:53.406884 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.407072 kubelet[3995]: E1216 03:23:53.407018 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.407072 kubelet[3995]: W1216 03:23:53.407023 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.407072 kubelet[3995]: E1216 03:23:53.407030 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.407171 kubelet[3995]: E1216 03:23:53.407122 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.407171 kubelet[3995]: W1216 03:23:53.407127 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.407171 kubelet[3995]: E1216 03:23:53.407155 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.413809 containerd[2508]: time="2025-12-16T03:23:53.413784019Z" level=info msg="CreateContainer within sandbox \"da7f1c691a2cb937bfe44ff7529de80fd6c26821f12128f920649086e094e27f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e597c5e89aa2196025fc816403ec967a5efe40da0b9651fac48a810bc0fff460\"" Dec 16 03:23:53.414251 containerd[2508]: time="2025-12-16T03:23:53.414225802Z" level=info msg="StartContainer for \"e597c5e89aa2196025fc816403ec967a5efe40da0b9651fac48a810bc0fff460\"" Dec 16 03:23:53.415448 containerd[2508]: time="2025-12-16T03:23:53.415423500Z" level=info msg="connecting to shim e597c5e89aa2196025fc816403ec967a5efe40da0b9651fac48a810bc0fff460" address="unix:///run/containerd/s/dd4c04c21b51146c412ce398c14fd9a2ca4395ca2403a161b2f762433462f427" protocol=ttrpc version=3 Dec 16 03:23:53.435306 systemd[1]: Started cri-containerd-e597c5e89aa2196025fc816403ec967a5efe40da0b9651fac48a810bc0fff460.scope - libcontainer container e597c5e89aa2196025fc816403ec967a5efe40da0b9651fac48a810bc0fff460. Dec 16 03:23:53.444000 audit: BPF prog-id=180 op=LOAD Dec 16 03:23:53.444000 audit: BPF prog-id=181 op=LOAD Dec 16 03:23:53.449292 kernel: audit: type=1334 audit(1765855433.444:571): prog-id=180 op=LOAD Dec 16 03:23:53.449345 kernel: audit: type=1334 audit(1765855433.444:572): prog-id=181 op=LOAD Dec 16 03:23:53.444000 audit[4530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.454677 kernel: audit: type=1300 audit(1765855433.444:572): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.465431 kernel: audit: type=1327 audit(1765855433.444:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.444000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:23:53.444000 audit[4530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.444000 audit: BPF prog-id=182 op=LOAD Dec 16 03:23:53.444000 audit[4530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 
a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.444000 audit: BPF prog-id=183 op=LOAD Dec 16 03:23:53.444000 audit[4530]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.444000 audit: BPF prog-id=183 op=UNLOAD Dec 16 03:23:53.444000 audit[4530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.444000 audit: BPF prog-id=182 op=UNLOAD Dec 16 03:23:53.444000 audit[4530]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.444000 audit: BPF prog-id=184 op=LOAD Dec 16 03:23:53.444000 audit[4530]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4415 pid=4530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535393763356538396161323139363032356663383136343033656339 Dec 16 03:23:53.491240 containerd[2508]: time="2025-12-16T03:23:53.491200566Z" level=info msg="StartContainer for \"e597c5e89aa2196025fc816403ec967a5efe40da0b9651fac48a810bc0fff460\" returns successfully" Dec 16 03:23:53.507080 kubelet[3995]: E1216 03:23:53.507055 3995 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.507080 kubelet[3995]: W1216 03:23:53.507071 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.507306 kubelet[3995]: E1216 03:23:53.507085 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.507306 kubelet[3995]: E1216 03:23:53.507245 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.507306 kubelet[3995]: W1216 03:23:53.507251 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.507306 kubelet[3995]: E1216 03:23:53.507260 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.507589 kubelet[3995]: E1216 03:23:53.507374 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.507589 kubelet[3995]: W1216 03:23:53.507379 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.507589 kubelet[3995]: E1216 03:23:53.507385 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.507589 kubelet[3995]: E1216 03:23:53.507496 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.507589 kubelet[3995]: W1216 03:23:53.507501 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.507589 kubelet[3995]: E1216 03:23:53.507508 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.507984 kubelet[3995]: E1216 03:23:53.507611 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.507984 kubelet[3995]: W1216 03:23:53.507615 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.507984 kubelet[3995]: E1216 03:23:53.507622 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.507984 kubelet[3995]: E1216 03:23:53.507905 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.507984 kubelet[3995]: W1216 03:23:53.507915 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.507984 kubelet[3995]: E1216 03:23:53.507924 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.508326 kubelet[3995]: E1216 03:23:53.508254 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.508326 kubelet[3995]: W1216 03:23:53.508260 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.508326 kubelet[3995]: E1216 03:23:53.508267 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.508500 kubelet[3995]: E1216 03:23:53.508494 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.508625 kubelet[3995]: W1216 03:23:53.508532 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.508625 kubelet[3995]: E1216 03:23:53.508555 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.508764 kubelet[3995]: E1216 03:23:53.508746 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.508842 kubelet[3995]: W1216 03:23:53.508799 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.508842 kubelet[3995]: E1216 03:23:53.508810 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.509034 kubelet[3995]: E1216 03:23:53.509023 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.509167 kubelet[3995]: W1216 03:23:53.509079 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.509167 kubelet[3995]: E1216 03:23:53.509089 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.509390 kubelet[3995]: E1216 03:23:53.509333 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.509390 kubelet[3995]: W1216 03:23:53.509341 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.509390 kubelet[3995]: E1216 03:23:53.509349 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.509608 kubelet[3995]: E1216 03:23:53.509601 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.509705 kubelet[3995]: W1216 03:23:53.509672 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.509705 kubelet[3995]: E1216 03:23:53.509685 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.509994 kubelet[3995]: E1216 03:23:53.509974 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.510106 kubelet[3995]: W1216 03:23:53.510043 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.510106 kubelet[3995]: E1216 03:23:53.510069 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.510406 kubelet[3995]: E1216 03:23:53.510359 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.510406 kubelet[3995]: W1216 03:23:53.510386 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.510406 kubelet[3995]: E1216 03:23:53.510396 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.510684 kubelet[3995]: E1216 03:23:53.510667 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.510763 kubelet[3995]: W1216 03:23:53.510720 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.510763 kubelet[3995]: E1216 03:23:53.510731 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.511012 kubelet[3995]: E1216 03:23:53.510948 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.511012 kubelet[3995]: W1216 03:23:53.510969 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.511012 kubelet[3995]: E1216 03:23:53.510982 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.511303 kubelet[3995]: E1216 03:23:53.511288 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.511349 kubelet[3995]: W1216 03:23:53.511321 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.511349 kubelet[3995]: E1216 03:23:53.511332 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.511498 kubelet[3995]: E1216 03:23:53.511487 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.511498 kubelet[3995]: W1216 03:23:53.511495 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.511554 kubelet[3995]: E1216 03:23:53.511502 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.511695 kubelet[3995]: E1216 03:23:53.511683 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.511723 kubelet[3995]: W1216 03:23:53.511696 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.511723 kubelet[3995]: E1216 03:23:53.511704 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.511854 kubelet[3995]: E1216 03:23:53.511844 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.511885 kubelet[3995]: W1216 03:23:53.511863 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.511885 kubelet[3995]: E1216 03:23:53.511870 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.512036 kubelet[3995]: E1216 03:23:53.512026 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.512036 kubelet[3995]: W1216 03:23:53.512033 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.512083 kubelet[3995]: E1216 03:23:53.512040 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.512303 kubelet[3995]: E1216 03:23:53.512291 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.512303 kubelet[3995]: W1216 03:23:53.512300 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.512372 kubelet[3995]: E1216 03:23:53.512308 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.513091 kubelet[3995]: E1216 03:23:53.512709 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.513091 kubelet[3995]: W1216 03:23:53.512721 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.513091 kubelet[3995]: E1216 03:23:53.512737 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.513091 kubelet[3995]: E1216 03:23:53.512901 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.513091 kubelet[3995]: W1216 03:23:53.512908 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.513091 kubelet[3995]: E1216 03:23:53.512915 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:53.513091 kubelet[3995]: E1216 03:23:53.513082 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.513091 kubelet[3995]: W1216 03:23:53.513089 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.513091 kubelet[3995]: E1216 03:23:53.513096 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:53.513741 containerd[2508]: time="2025-12-16T03:23:53.513676278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xh77j,Uid:610b4157-1b62-4b10-b5b8-50f3aa07f4a3,Namespace:calico-system,Attempt:0,}" Dec 16 03:23:53.549812 containerd[2508]: time="2025-12-16T03:23:53.549782818Z" level=info msg="connecting to shim 48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc" address="unix:///run/containerd/s/8db59c9c5516f10af1d1a8f8d33db2c2403f0afee1edeac1589763d59787899f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:23:53.579306 systemd[1]: Started cri-containerd-48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc.scope - libcontainer container 48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc. Dec 16 03:23:53.586000 audit: BPF prog-id=185 op=LOAD Dec 16 03:23:53.587000 audit: BPF prog-id=186 op=LOAD Dec 16 03:23:53.587000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653332316339383130306632656433633666396537363765326635 Dec 16 03:23:53.587000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:23:53.587000 audit[4602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653332316339383130306632656433633666396537363765326635 Dec 16 03:23:53.587000 audit: BPF prog-id=187 op=LOAD Dec 16 03:23:53.587000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653332316339383130306632656433633666396537363765326635 Dec 16 03:23:53.587000 audit: BPF prog-id=188 op=LOAD Dec 16 03:23:53.587000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.587000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653332316339383130306632656433633666396537363765326635 Dec 16 03:23:53.587000 audit: BPF prog-id=188 op=UNLOAD Dec 16 03:23:53.587000 audit[4602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653332316339383130306632656433633666396537363765326635 Dec 16 03:23:53.587000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:23:53.587000 audit[4602]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653332316339383130306632656433633666396537363765326635 Dec 16 03:23:53.587000 audit: BPF prog-id=189 op=LOAD Dec 16 03:23:53.587000 audit[4602]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4591 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:53.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438653332316339383130306632656433633666396537363765326635 Dec 16 03:23:53.601539 containerd[2508]: time="2025-12-16T03:23:53.601460107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xh77j,Uid:610b4157-1b62-4b10-b5b8-50f3aa07f4a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc\"" Dec 16 03:23:53.606846 containerd[2508]: time="2025-12-16T03:23:53.606791881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:23:53.661228 kubelet[3995]: E1216 03:23:53.660339 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:53.661228 kubelet[3995]: W1216 03:23:53.661165 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:53.661228 kubelet[3995]: E1216 03:23:53.661191 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.503847 kubelet[3995]: E1216 03:23:54.503813 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.504719 kubelet[3995]: W1216 03:23:54.503926 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.504719 kubelet[3995]: E1216 03:23:54.503949 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.504719 kubelet[3995]: E1216 03:23:54.504524 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.504719 kubelet[3995]: W1216 03:23:54.504538 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.504719 kubelet[3995]: E1216 03:23:54.504553 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.505071 kubelet[3995]: E1216 03:23:54.504986 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.505071 kubelet[3995]: W1216 03:23:54.505000 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.505071 kubelet[3995]: E1216 03:23:54.505013 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.505384 kubelet[3995]: E1216 03:23:54.505327 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.505384 kubelet[3995]: W1216 03:23:54.505337 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.505384 kubelet[3995]: E1216 03:23:54.505348 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.505634 kubelet[3995]: E1216 03:23:54.505594 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.505634 kubelet[3995]: W1216 03:23:54.505603 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.505634 kubelet[3995]: E1216 03:23:54.505616 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.505932 kubelet[3995]: E1216 03:23:54.505902 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.505932 kubelet[3995]: W1216 03:23:54.505911 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.506097 kubelet[3995]: E1216 03:23:54.506018 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.506286 kubelet[3995]: E1216 03:23:54.506249 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.506286 kubelet[3995]: W1216 03:23:54.506258 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.506286 kubelet[3995]: E1216 03:23:54.506269 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.506558 kubelet[3995]: E1216 03:23:54.506516 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.506558 kubelet[3995]: W1216 03:23:54.506524 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.506558 kubelet[3995]: E1216 03:23:54.506532 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.506799 kubelet[3995]: E1216 03:23:54.506759 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.506799 kubelet[3995]: W1216 03:23:54.506766 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.506799 kubelet[3995]: E1216 03:23:54.506773 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.507022 kubelet[3995]: E1216 03:23:54.506984 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.507022 kubelet[3995]: W1216 03:23:54.506992 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.507022 kubelet[3995]: E1216 03:23:54.506999 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.507347 kubelet[3995]: E1216 03:23:54.507224 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.507347 kubelet[3995]: W1216 03:23:54.507233 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.507347 kubelet[3995]: E1216 03:23:54.507243 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.507673 kubelet[3995]: E1216 03:23:54.507637 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.507673 kubelet[3995]: W1216 03:23:54.507649 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.507826 kubelet[3995]: E1216 03:23:54.507661 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.508274 kubelet[3995]: E1216 03:23:54.508213 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.508274 kubelet[3995]: W1216 03:23:54.508227 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.508274 kubelet[3995]: E1216 03:23:54.508239 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.508629 kubelet[3995]: E1216 03:23:54.508538 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.508629 kubelet[3995]: W1216 03:23:54.508554 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.508629 kubelet[3995]: E1216 03:23:54.508565 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.508841 kubelet[3995]: E1216 03:23:54.508794 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.508841 kubelet[3995]: W1216 03:23:54.508803 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.508841 kubelet[3995]: E1216 03:23:54.508813 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.513758 kubelet[3995]: E1216 03:23:54.513719 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.513758 kubelet[3995]: W1216 03:23:54.513734 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.514232 kubelet[3995]: E1216 03:23:54.513908 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.514666 kubelet[3995]: E1216 03:23:54.514624 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.514666 kubelet[3995]: W1216 03:23:54.514637 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.514666 kubelet[3995]: E1216 03:23:54.514653 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.515401 kubelet[3995]: E1216 03:23:54.515364 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.515401 kubelet[3995]: W1216 03:23:54.515378 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.515542 kubelet[3995]: E1216 03:23:54.515390 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.515776 kubelet[3995]: E1216 03:23:54.515768 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.515896 kubelet[3995]: W1216 03:23:54.515823 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.515896 kubelet[3995]: E1216 03:23:54.515835 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.516431 kubelet[3995]: E1216 03:23:54.516374 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.516431 kubelet[3995]: W1216 03:23:54.516406 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.516431 kubelet[3995]: E1216 03:23:54.516418 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.516827 kubelet[3995]: E1216 03:23:54.516797 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.516827 kubelet[3995]: W1216 03:23:54.516807 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.516827 kubelet[3995]: E1216 03:23:54.516816 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.517157 kubelet[3995]: E1216 03:23:54.517122 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.517157 kubelet[3995]: W1216 03:23:54.517130 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.517245 kubelet[3995]: E1216 03:23:54.517236 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.517456 kubelet[3995]: E1216 03:23:54.517429 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.517456 kubelet[3995]: W1216 03:23:54.517437 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.517456 kubelet[3995]: E1216 03:23:54.517445 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.517739 kubelet[3995]: E1216 03:23:54.517724 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.517859 kubelet[3995]: W1216 03:23:54.517810 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.517890 kubelet[3995]: E1216 03:23:54.517853 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.518154 kubelet[3995]: E1216 03:23:54.518065 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.518154 kubelet[3995]: W1216 03:23:54.518075 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.518154 kubelet[3995]: E1216 03:23:54.518091 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.518399 kubelet[3995]: E1216 03:23:54.518371 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.518399 kubelet[3995]: W1216 03:23:54.518395 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.518482 kubelet[3995]: E1216 03:23:54.518406 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.518599 kubelet[3995]: E1216 03:23:54.518592 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.518630 kubelet[3995]: W1216 03:23:54.518600 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.518630 kubelet[3995]: E1216 03:23:54.518608 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.518861 kubelet[3995]: E1216 03:23:54.518838 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.518861 kubelet[3995]: W1216 03:23:54.518857 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.518919 kubelet[3995]: E1216 03:23:54.518866 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.519023 kubelet[3995]: E1216 03:23:54.518999 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.519023 kubelet[3995]: W1216 03:23:54.519020 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.519076 kubelet[3995]: E1216 03:23:54.519027 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.519190 kubelet[3995]: E1216 03:23:54.519177 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.519190 kubelet[3995]: W1216 03:23:54.519185 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.519255 kubelet[3995]: E1216 03:23:54.519191 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.519296 kubelet[3995]: E1216 03:23:54.519283 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.519296 kubelet[3995]: W1216 03:23:54.519292 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.519355 kubelet[3995]: E1216 03:23:54.519299 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.519457 kubelet[3995]: E1216 03:23:54.519439 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.519457 kubelet[3995]: W1216 03:23:54.519452 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.519510 kubelet[3995]: E1216 03:23:54.519459 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:23:54.519763 kubelet[3995]: E1216 03:23:54.519748 3995 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:23:54.519763 kubelet[3995]: W1216 03:23:54.519759 3995 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:23:54.519823 kubelet[3995]: E1216 03:23:54.519768 3995 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:23:54.727221 containerd[2508]: time="2025-12-16T03:23:54.727173015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:54.729611 containerd[2508]: time="2025-12-16T03:23:54.729575492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:23:54.735042 containerd[2508]: time="2025-12-16T03:23:54.734250728Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:54.737825 containerd[2508]: time="2025-12-16T03:23:54.737794645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:54.738166 containerd[2508]: time="2025-12-16T03:23:54.738121585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.130928617s" Dec 16 03:23:54.738215 containerd[2508]: time="2025-12-16T03:23:54.738172742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:23:54.744238 containerd[2508]: time="2025-12-16T03:23:54.744208796Z" level=info msg="CreateContainer within sandbox \"48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:23:54.756711 kubelet[3995]: I1216 03:23:54.756329 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f4f4b55c-rzxr9" podStartSLOduration=2.983944866 podStartE2EDuration="4.756310854s" podCreationTimestamp="2025-12-16 03:23:50 +0000 UTC" firstStartedPulling="2025-12-16 03:23:51.582956375 +0000 UTC m=+41.315298819" lastFinishedPulling="2025-12-16 03:23:53.355322354 +0000 UTC m=+43.087664807" observedRunningTime="2025-12-16 03:23:54.755992541 +0000 UTC m=+44.488334992" watchObservedRunningTime="2025-12-16 03:23:54.756310854 +0000 UTC m=+44.488653307" Dec 16 03:23:54.762561 containerd[2508]: time="2025-12-16T03:23:54.762393669Z" level=info msg="Container f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:23:54.776775 containerd[2508]: time="2025-12-16T03:23:54.776750714Z" level=info msg="CreateContainer within sandbox \"48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1\"" Dec 16 03:23:54.777102 containerd[2508]: time="2025-12-16T03:23:54.777081842Z" level=info msg="StartContainer for \"f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1\"" Dec 16 03:23:54.778128 containerd[2508]: time="2025-12-16T03:23:54.778102527Z" level=info msg="connecting to shim 
f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1" address="unix:///run/containerd/s/8db59c9c5516f10af1d1a8f8d33db2c2403f0afee1edeac1589763d59787899f" protocol=ttrpc version=3 Dec 16 03:23:54.803312 systemd[1]: Started cri-containerd-f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1.scope - libcontainer container f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1. Dec 16 03:23:54.839000 audit: BPF prog-id=190 op=LOAD Dec 16 03:23:54.839000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4591 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:54.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646539336136353036343139333834666131396534343161326266 Dec 16 03:23:54.840000 audit: BPF prog-id=191 op=LOAD Dec 16 03:23:54.840000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4591 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:54.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646539336136353036343139333834666131396534343161326266 Dec 16 03:23:54.840000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:23:54.840000 audit[4682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:54.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646539336136353036343139333834666131396534343161326266 Dec 16 03:23:54.840000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:23:54.840000 audit[4682]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:54.840000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646539336136353036343139333834666131396534343161326266 Dec 16 03:23:54.840000 audit: BPF prog-id=192 op=LOAD Dec 16 03:23:54.840000 audit[4682]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4591 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:54.840000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646539336136353036343139333834666131396534343161326266 Dec 16 03:23:54.860898 containerd[2508]: time="2025-12-16T03:23:54.860835930Z" level=info msg="StartContainer for \"f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1\" returns successfully" Dec 16 03:23:54.864497 systemd[1]: cri-containerd-f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1.scope: Deactivated successfully. Dec 16 03:23:54.866000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:23:54.868029 containerd[2508]: time="2025-12-16T03:23:54.868000739Z" level=info msg="received container exit event container_id:\"f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1\" id:\"f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1\" pid:4694 exited_at:{seconds:1765855434 nanos:867607284}" Dec 16 03:23:54.891607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2de93a6506419384fa19e441a2bff5ee98b9f633577d58de93157a3157064b1-rootfs.mount: Deactivated successfully. Dec 16 03:23:55.345537 kubelet[3995]: E1216 03:23:55.345459 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:23:56.479961 containerd[2508]: time="2025-12-16T03:23:56.479617165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:23:56.804000 audit[4730]: NETFILTER_CFG table=filter:124 family=2 entries=21 op=nft_register_rule pid=4730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:56.804000 audit[4730]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe19e9d490 a2=0 a3=7ffe19e9d47c items=0 ppid=4100 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:56.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:56.813000 audit[4730]: NETFILTER_CFG table=nat:125 family=2 entries=19 op=nft_register_chain pid=4730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:23:56.813000 audit[4730]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe19e9d490 a2=0 a3=7ffe19e9d47c items=0 ppid=4100 pid=4730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:23:56.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:23:57.345376 kubelet[3995]: E1216 03:23:57.345326 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:23:59.345332 kubelet[3995]: E1216 03:23:59.345287 
3995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:23:59.946829 containerd[2508]: time="2025-12-16T03:23:59.946786730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:59.949116 containerd[2508]: time="2025-12-16T03:23:59.949079231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:23:59.951584 containerd[2508]: time="2025-12-16T03:23:59.951537880Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:59.954672 containerd[2508]: time="2025-12-16T03:23:59.954579928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:23:59.955155 containerd[2508]: time="2025-12-16T03:23:59.954988297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.475327685s" Dec 16 03:23:59.955155 containerd[2508]: time="2025-12-16T03:23:59.955015804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 03:23:59.960531 containerd[2508]: time="2025-12-16T03:23:59.960505103Z" level=info msg="CreateContainer within sandbox \"48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:23:59.978789 containerd[2508]: time="2025-12-16T03:23:59.978761143Z" level=info msg="Container 10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:23:59.997241 containerd[2508]: time="2025-12-16T03:23:59.997128979Z" level=info msg="CreateContainer within sandbox \"48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7\"" Dec 16 03:23:59.997616 containerd[2508]: time="2025-12-16T03:23:59.997592926Z" level=info msg="StartContainer for \"10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7\"" Dec 16 03:23:59.998815 containerd[2508]: time="2025-12-16T03:23:59.998723691Z" level=info msg="connecting to shim 10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7" address="unix:///run/containerd/s/8db59c9c5516f10af1d1a8f8d33db2c2403f0afee1edeac1589763d59787899f" protocol=ttrpc version=3 Dec 16 03:24:00.018330 systemd[1]: Started cri-containerd-10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7.scope - libcontainer container 10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7. 
Dec 16 03:24:00.052000 audit: BPF prog-id=193 op=LOAD Dec 16 03:24:00.054380 kernel: kauditd_printk_skb: 62 callbacks suppressed Dec 16 03:24:00.054449 kernel: audit: type=1334 audit(1765855440.052:595): prog-id=193 op=LOAD Dec 16 03:24:00.052000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.061097 kernel: audit: type=1300 audit(1765855440.052:595): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.066775 kernel: audit: type=1327 audit(1765855440.052:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.070343 kernel: audit: type=1334 audit(1765855440.052:596): prog-id=194 op=LOAD Dec 16 03:24:00.052000 audit: BPF prog-id=194 op=LOAD Dec 16 03:24:00.052000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.087063 kernel: audit: type=1300 audit(1765855440.052:596): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.087106 kernel: audit: type=1327 audit(1765855440.052:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.091361 kernel: audit: type=1334 audit(1765855440.052:597): prog-id=194 op=UNLOAD Dec 16 03:24:00.052000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:24:00.098620 kernel: audit: type=1300 audit(1765855440.052:597): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.052000 
audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.098750 containerd[2508]: time="2025-12-16T03:24:00.095499539Z" level=info msg="StartContainer for \"10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7\" returns successfully" Dec 16 03:24:00.112626 kernel: audit: type=1327 audit(1765855440.052:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.052000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:24:00.052000 audit[4739]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.052000 audit: BPF prog-id=195 op=LOAD Dec 16 03:24:00.052000 audit[4739]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4591 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:00.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130633763396564623366656164323836323162396664396535303861 Dec 16 03:24:00.116161 kernel: audit: type=1334 audit(1765855440.052:598): prog-id=193 op=UNLOAD Dec 16 03:24:01.219700 containerd[2508]: time="2025-12-16T03:24:01.219649424Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:24:01.221712 systemd[1]: cri-containerd-10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7.scope: Deactivated successfully. Dec 16 03:24:01.222025 systemd[1]: cri-containerd-10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7.scope: Consumed 404ms CPU time, 192.1M memory peak, 171.3M written to disk. 
Dec 16 03:24:01.223999 containerd[2508]: time="2025-12-16T03:24:01.223968220Z" level=info msg="received container exit event container_id:\"10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7\" id:\"10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7\" pid:4753 exited_at:{seconds:1765855441 nanos:223634973}"
Dec 16 03:24:01.225000 audit: BPF prog-id=195 op=UNLOAD
Dec 16 03:24:01.245759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10c7c9edb3fead28621b9fd9e508acb7f5837704af4d4b67a2234c393206e1c7-rootfs.mount: Deactivated successfully.
Dec 16 03:24:01.265402 kubelet[3995]: I1216 03:24:01.265379 3995 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 03:24:01.350571 systemd[1]: Created slice kubepods-besteffort-pod52f35797_5a94_4b5f_8ac7_147ca2758736.slice - libcontainer container kubepods-besteffort-pod52f35797_5a94_4b5f_8ac7_147ca2758736.slice.
Dec 16 03:24:01.355510 containerd[2508]: time="2025-12-16T03:24:01.355473819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srg9b,Uid:52f35797-5a94-4b5f-8ac7-147ca2758736,Namespace:calico-system,Attempt:0,}"
Dec 16 03:24:02.114106 systemd[1]: Created slice kubepods-besteffort-pod17fccc4a_a08c_4495_a01b_bad3cd3eab43.slice - libcontainer container kubepods-besteffort-pod17fccc4a_a08c_4495_a01b_bad3cd3eab43.slice.
Dec 16 03:24:02.147537 containerd[2508]: time="2025-12-16T03:24:02.147491602Z" level=error msg="Failed to destroy network for sandbox \"5bed5bc0a85ee17871b5480764508fecc160ce3dfbb80eaba5270579eabffb9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:24:02.150361 systemd[1]: run-netns-cni\x2d3ed4a356\x2db3c4\x2d73d5\x2da5f4\x2d0b15be1f0040.mount: Deactivated successfully.
Dec 16 03:24:02.158743 systemd[1]: Created slice kubepods-besteffort-pode0164474_95e7_4b01_988d_4ae10762d8d3.slice - libcontainer container kubepods-besteffort-pode0164474_95e7_4b01_988d_4ae10762d8d3.slice.
Dec 16 03:24:02.159486 containerd[2508]: time="2025-12-16T03:24:02.159257179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srg9b,Uid:52f35797-5a94-4b5f-8ac7-147ca2758736,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bed5bc0a85ee17871b5480764508fecc160ce3dfbb80eaba5270579eabffb9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:02.160029 kubelet[3995]: E1216 03:24:02.159994 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bed5bc0a85ee17871b5480764508fecc160ce3dfbb80eaba5270579eabffb9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:02.160168 kubelet[3995]: E1216 03:24:02.160056 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bed5bc0a85ee17871b5480764508fecc160ce3dfbb80eaba5270579eabffb9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-srg9b" Dec 16 03:24:02.160168 kubelet[3995]: E1216 03:24:02.160078 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bed5bc0a85ee17871b5480764508fecc160ce3dfbb80eaba5270579eabffb9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-srg9b" Dec 16 03:24:02.160168 kubelet[3995]: E1216 03:24:02.160124 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bed5bc0a85ee17871b5480764508fecc160ce3dfbb80eaba5270579eabffb9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:24:02.163771 kubelet[3995]: I1216 03:24:02.163745 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27fds\" (UniqueName: \"kubernetes.io/projected/17fccc4a-a08c-4495-a01b-bad3cd3eab43-kube-api-access-27fds\") pod \"goldmane-666569f655-284xb\" (UID: \"17fccc4a-a08c-4495-a01b-bad3cd3eab43\") " pod="calico-system/goldmane-666569f655-284xb" Dec 16 03:24:02.163867 kubelet[3995]: I1216 03:24:02.163779 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdg8\" (UniqueName: \"kubernetes.io/projected/e0164474-95e7-4b01-988d-4ae10762d8d3-kube-api-access-fzdg8\") pod \"calico-kube-controllers-85c7d9d48b-hc6qj\" (UID: 
\"e0164474-95e7-4b01-988d-4ae10762d8d3\") " pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" Dec 16 03:24:02.163867 kubelet[3995]: I1216 03:24:02.163802 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17fccc4a-a08c-4495-a01b-bad3cd3eab43-config\") pod \"goldmane-666569f655-284xb\" (UID: \"17fccc4a-a08c-4495-a01b-bad3cd3eab43\") " pod="calico-system/goldmane-666569f655-284xb" Dec 16 03:24:02.163867 kubelet[3995]: I1216 03:24:02.163818 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/17fccc4a-a08c-4495-a01b-bad3cd3eab43-goldmane-key-pair\") pod \"goldmane-666569f655-284xb\" (UID: \"17fccc4a-a08c-4495-a01b-bad3cd3eab43\") " pod="calico-system/goldmane-666569f655-284xb" Dec 16 03:24:02.163867 kubelet[3995]: I1216 03:24:02.163838 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/17fccc4a-a08c-4495-a01b-bad3cd3eab43-goldmane-ca-bundle\") pod \"goldmane-666569f655-284xb\" (UID: \"17fccc4a-a08c-4495-a01b-bad3cd3eab43\") " pod="calico-system/goldmane-666569f655-284xb" Dec 16 03:24:02.163867 kubelet[3995]: I1216 03:24:02.163855 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0164474-95e7-4b01-988d-4ae10762d8d3-tigera-ca-bundle\") pod \"calico-kube-controllers-85c7d9d48b-hc6qj\" (UID: \"e0164474-95e7-4b01-988d-4ae10762d8d3\") " pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" Dec 16 03:24:02.170854 systemd[1]: Created slice kubepods-burstable-podb18239fa_27f6_46f2_8f55_8e660ec10a40.slice - libcontainer container kubepods-burstable-podb18239fa_27f6_46f2_8f55_8e660ec10a40.slice. Dec 16 03:24:02.186155 systemd[1]: Created slice kubepods-burstable-podc52e994d_d5bc_47b8_904c_c1132e917f17.slice - libcontainer container kubepods-burstable-podc52e994d_d5bc_47b8_904c_c1132e917f17.slice. Dec 16 03:24:02.193015 systemd[1]: Created slice kubepods-besteffort-pod54b56cb1_52c9_41bc_a663_ff33a6ea04fc.slice - libcontainer container kubepods-besteffort-pod54b56cb1_52c9_41bc_a663_ff33a6ea04fc.slice. Dec 16 03:24:02.197507 systemd[1]: Created slice kubepods-besteffort-pod453a3c95_d107_4f4e_b7f5_ee250655b168.slice - libcontainer container kubepods-besteffort-pod453a3c95_d107_4f4e_b7f5_ee250655b168.slice. Dec 16 03:24:02.203792 systemd[1]: Created slice kubepods-besteffort-pod928c764d_cf1a_4e24_874a_b4bd241b86e5.slice - libcontainer container kubepods-besteffort-pod928c764d_cf1a_4e24_874a_b4bd241b86e5.slice. Dec 16 03:24:02.208077 systemd[1]: Created slice kubepods-besteffort-podb0a716ce_6354_47ff_896b_1da783a25f3a.slice - libcontainer container kubepods-besteffort-podb0a716ce_6354_47ff_896b_1da783a25f3a.slice. 
Dec 16 03:24:02.264762 kubelet[3995]: I1216 03:24:02.264727 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/928c764d-cf1a-4e24-874a-b4bd241b86e5-calico-apiserver-certs\") pod \"calico-apiserver-86474dbd54-65v57\" (UID: \"928c764d-cf1a-4e24-874a-b4bd241b86e5\") " pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" Dec 16 03:24:02.264762 kubelet[3995]: I1216 03:24:02.264765 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqh2j\" (UniqueName: \"kubernetes.io/projected/928c764d-cf1a-4e24-874a-b4bd241b86e5-kube-api-access-kqh2j\") pod \"calico-apiserver-86474dbd54-65v57\" (UID: \"928c764d-cf1a-4e24-874a-b4bd241b86e5\") " pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" Dec 16 03:24:02.264937 kubelet[3995]: I1216 03:24:02.264782 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pwl9\" (UniqueName: \"kubernetes.io/projected/b0a716ce-6354-47ff-896b-1da783a25f3a-kube-api-access-8pwl9\") pod \"calico-apiserver-86474dbd54-fphkv\" (UID: \"b0a716ce-6354-47ff-896b-1da783a25f3a\") " pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" Dec 16 03:24:02.264937 kubelet[3995]: I1216 03:24:02.264799 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxtzz\" (UniqueName: \"kubernetes.io/projected/b18239fa-27f6-46f2-8f55-8e660ec10a40-kube-api-access-wxtzz\") pod \"coredns-674b8bbfcf-dvbkk\" (UID: \"b18239fa-27f6-46f2-8f55-8e660ec10a40\") " pod="kube-system/coredns-674b8bbfcf-dvbkk" Dec 16 03:24:02.264937 kubelet[3995]: I1216 03:24:02.264816 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c52e994d-d5bc-47b8-904c-c1132e917f17-config-volume\") pod \"coredns-674b8bbfcf-fqmqs\" (UID: \"c52e994d-d5bc-47b8-904c-c1132e917f17\") " pod="kube-system/coredns-674b8bbfcf-fqmqs" Dec 16 03:24:02.264937 kubelet[3995]: I1216 03:24:02.264835 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kctb\" (UniqueName: \"kubernetes.io/projected/453a3c95-d107-4f4e-b7f5-ee250655b168-kube-api-access-8kctb\") pod \"calico-apiserver-69c4bb98b9-88qzw\" (UID: \"453a3c95-d107-4f4e-b7f5-ee250655b168\") " pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" Dec 16 03:24:02.264937 kubelet[3995]: I1216 03:24:02.264855 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-backend-key-pair\") pod \"whisker-755bbbcf8f-l5f5l\" (UID: \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\") " pod="calico-system/whisker-755bbbcf8f-l5f5l" Dec 16 03:24:02.265099 kubelet[3995]: I1216 03:24:02.264873 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/453a3c95-d107-4f4e-b7f5-ee250655b168-calico-apiserver-certs\") pod \"calico-apiserver-69c4bb98b9-88qzw\" (UID: \"453a3c95-d107-4f4e-b7f5-ee250655b168\") " pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" Dec 16 03:24:02.265099 kubelet[3995]: I1216 03:24:02.264890 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b0a716ce-6354-47ff-896b-1da783a25f3a-calico-apiserver-certs\") pod \"calico-apiserver-86474dbd54-fphkv\" (UID: \"b0a716ce-6354-47ff-896b-1da783a25f3a\") " pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" Dec 16 03:24:02.265099 kubelet[3995]: I1216 03:24:02.264910 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbsrx\" (UniqueName: \"kubernetes.io/projected/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-kube-api-access-xbsrx\") pod \"whisker-755bbbcf8f-l5f5l\" (UID: \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\") " pod="calico-system/whisker-755bbbcf8f-l5f5l" Dec 16 03:24:02.265099 kubelet[3995]: I1216 03:24:02.264954 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-ca-bundle\") pod \"whisker-755bbbcf8f-l5f5l\" (UID: \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\") " pod="calico-system/whisker-755bbbcf8f-l5f5l" Dec 16 03:24:02.265099 kubelet[3995]: I1216 03:24:02.265035 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b18239fa-27f6-46f2-8f55-8e660ec10a40-config-volume\") pod \"coredns-674b8bbfcf-dvbkk\" (UID: \"b18239fa-27f6-46f2-8f55-8e660ec10a40\") " pod="kube-system/coredns-674b8bbfcf-dvbkk" Dec 16 03:24:02.265270 kubelet[3995]: I1216 03:24:02.265054 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh4m5\" (UniqueName: \"kubernetes.io/projected/c52e994d-d5bc-47b8-904c-c1132e917f17-kube-api-access-hh4m5\") pod \"coredns-674b8bbfcf-fqmqs\" (UID: \"c52e994d-d5bc-47b8-904c-c1132e917f17\") " pod="kube-system/coredns-674b8bbfcf-fqmqs" Dec 16 03:24:02.495158 containerd[2508]: time="2025-12-16T03:24:02.495008985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 03:24:03.096183 containerd[2508]: time="2025-12-16T03:24:03.096133796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755bbbcf8f-l5f5l,Uid:54b56cb1-52c9-41bc-a663-ff33a6ea04fc,Namespace:calico-system,Attempt:0,}" Dec 16 03:24:03.148776 containerd[2508]: time="2025-12-16T03:24:03.148727800Z" level=error msg="Failed to destroy network for sandbox \"e760776c7f2e0e58d5490c5a0e47acac2dba82daf3c04268978c39a1ccc788bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.153975 containerd[2508]: time="2025-12-16T03:24:03.153825675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-755bbbcf8f-l5f5l,Uid:54b56cb1-52c9-41bc-a663-ff33a6ea04fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e760776c7f2e0e58d5490c5a0e47acac2dba82daf3c04268978c39a1ccc788bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.156274 kubelet[3995]: E1216 03:24:03.154246 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e760776c7f2e0e58d5490c5a0e47acac2dba82daf3c04268978c39a1ccc788bd\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.156274 kubelet[3995]: E1216 03:24:03.154298 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e760776c7f2e0e58d5490c5a0e47acac2dba82daf3c04268978c39a1ccc788bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-755bbbcf8f-l5f5l" Dec 16 03:24:03.156274 kubelet[3995]: E1216 03:24:03.154319 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e760776c7f2e0e58d5490c5a0e47acac2dba82daf3c04268978c39a1ccc788bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-755bbbcf8f-l5f5l" Dec 16 03:24:03.156610 kubelet[3995]: E1216 03:24:03.154369 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-755bbbcf8f-l5f5l_calico-system(54b56cb1-52c9-41bc-a663-ff33a6ea04fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-755bbbcf8f-l5f5l_calico-system(54b56cb1-52c9-41bc-a663-ff33a6ea04fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e760776c7f2e0e58d5490c5a0e47acac2dba82daf3c04268978c39a1ccc788bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-755bbbcf8f-l5f5l" podUID="54b56cb1-52c9-41bc-a663-ff33a6ea04fc" Dec 16 03:24:03.319421 containerd[2508]: time="2025-12-16T03:24:03.319361374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-284xb,Uid:17fccc4a-a08c-4495-a01b-bad3cd3eab43,Namespace:calico-system,Attempt:0,}" Dec 16 03:24:03.365541 containerd[2508]: time="2025-12-16T03:24:03.365448476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c7d9d48b-hc6qj,Uid:e0164474-95e7-4b01-988d-4ae10762d8d3,Namespace:calico-system,Attempt:0,}" Dec 16 03:24:03.382540 containerd[2508]: time="2025-12-16T03:24:03.382480442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvbkk,Uid:b18239fa-27f6-46f2-8f55-8e660ec10a40,Namespace:kube-system,Attempt:0,}" Dec 16 03:24:03.382950 containerd[2508]: time="2025-12-16T03:24:03.382931433Z" level=error msg="Failed to destroy network for sandbox \"eb05e5c63753f5d4bb99edc8becb528ba0910d402447246ebaf42bb94e73c58e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.385043 systemd[1]: run-netns-cni\x2dcf52470e\x2dbc69\x2deef0\x2dda22\x2d24d422c7fe0e.mount: Deactivated successfully. 
Dec 16 03:24:03.400564 containerd[2508]: time="2025-12-16T03:24:03.399797043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqmqs,Uid:c52e994d-d5bc-47b8-904c-c1132e917f17,Namespace:kube-system,Attempt:0,}" Dec 16 03:24:03.401530 containerd[2508]: time="2025-12-16T03:24:03.401486848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4bb98b9-88qzw,Uid:453a3c95-d107-4f4e-b7f5-ee250655b168,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:24:03.406828 containerd[2508]: time="2025-12-16T03:24:03.406794349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86474dbd54-65v57,Uid:928c764d-cf1a-4e24-874a-b4bd241b86e5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:24:03.410687 containerd[2508]: time="2025-12-16T03:24:03.410605607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86474dbd54-fphkv,Uid:b0a716ce-6354-47ff-896b-1da783a25f3a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:24:03.413467 containerd[2508]: time="2025-12-16T03:24:03.413427664Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-284xb,Uid:17fccc4a-a08c-4495-a01b-bad3cd3eab43,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb05e5c63753f5d4bb99edc8becb528ba0910d402447246ebaf42bb94e73c58e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.414126 kubelet[3995]: E1216 03:24:03.413731 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb05e5c63753f5d4bb99edc8becb528ba0910d402447246ebaf42bb94e73c58e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.414126 kubelet[3995]: E1216 03:24:03.413778 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb05e5c63753f5d4bb99edc8becb528ba0910d402447246ebaf42bb94e73c58e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-284xb" Dec 16 03:24:03.414126 kubelet[3995]: E1216 03:24:03.413799 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb05e5c63753f5d4bb99edc8becb528ba0910d402447246ebaf42bb94e73c58e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-284xb" Dec 16 03:24:03.414315 kubelet[3995]: E1216 03:24:03.413845 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-284xb_calico-system(17fccc4a-a08c-4495-a01b-bad3cd3eab43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-284xb_calico-system(17fccc4a-a08c-4495-a01b-bad3cd3eab43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb05e5c63753f5d4bb99edc8becb528ba0910d402447246ebaf42bb94e73c58e\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:24:03.464395 containerd[2508]: time="2025-12-16T03:24:03.464252537Z" level=error msg="Failed to destroy network for sandbox \"a83a9190f259160e482f273f540119db3279984a3cc074a45a762db7f17b6f53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.476987 containerd[2508]: time="2025-12-16T03:24:03.476939968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c7d9d48b-hc6qj,Uid:e0164474-95e7-4b01-988d-4ae10762d8d3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83a9190f259160e482f273f540119db3279984a3cc074a45a762db7f17b6f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.477445 kubelet[3995]: E1216 03:24:03.477161 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83a9190f259160e482f273f540119db3279984a3cc074a45a762db7f17b6f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.477445 kubelet[3995]: E1216 03:24:03.477223 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83a9190f259160e482f273f540119db3279984a3cc074a45a762db7f17b6f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" Dec 16 03:24:03.477445 kubelet[3995]: E1216 03:24:03.477247 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83a9190f259160e482f273f540119db3279984a3cc074a45a762db7f17b6f53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" Dec 16 03:24:03.477572 kubelet[3995]: E1216 03:24:03.477315 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85c7d9d48b-hc6qj_calico-system(e0164474-95e7-4b01-988d-4ae10762d8d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85c7d9d48b-hc6qj_calico-system(e0164474-95e7-4b01-988d-4ae10762d8d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a83a9190f259160e482f273f540119db3279984a3cc074a45a762db7f17b6f53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" 
podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:24:03.546714 containerd[2508]: time="2025-12-16T03:24:03.546609705Z" level=error msg="Failed to destroy network for sandbox \"ca8cee915b55e3ad1738dcf5b79c3c4c99146749bf3bbfad3a31da5648f8fcde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.548401 containerd[2508]: time="2025-12-16T03:24:03.548287809Z" level=error msg="Failed to destroy network for sandbox \"7dc428781c485b96bee3ebe7ad83dbd8234e6f7aa6c7f2c2b02237ef145b97a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.552856 containerd[2508]: time="2025-12-16T03:24:03.552088025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvbkk,Uid:b18239fa-27f6-46f2-8f55-8e660ec10a40,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca8cee915b55e3ad1738dcf5b79c3c4c99146749bf3bbfad3a31da5648f8fcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.553642 kubelet[3995]: E1216 03:24:03.553601 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca8cee915b55e3ad1738dcf5b79c3c4c99146749bf3bbfad3a31da5648f8fcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.553860 kubelet[3995]: E1216 03:24:03.553667 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca8cee915b55e3ad1738dcf5b79c3c4c99146749bf3bbfad3a31da5648f8fcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvbkk" Dec 16 03:24:03.553860 kubelet[3995]: E1216 03:24:03.553701 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca8cee915b55e3ad1738dcf5b79c3c4c99146749bf3bbfad3a31da5648f8fcde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvbkk" Dec 16 03:24:03.553860 kubelet[3995]: E1216 03:24:03.553768 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dvbkk_kube-system(b18239fa-27f6-46f2-8f55-8e660ec10a40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dvbkk_kube-system(b18239fa-27f6-46f2-8f55-8e660ec10a40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca8cee915b55e3ad1738dcf5b79c3c4c99146749bf3bbfad3a31da5648f8fcde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-dvbkk" podUID="b18239fa-27f6-46f2-8f55-8e660ec10a40" Dec 16 03:24:03.556622 containerd[2508]: time="2025-12-16T03:24:03.556590309Z" level=error msg="Failed to destroy network for sandbox \"54070852c3a0c494701fc5d465d12d6ea3fd0b36db8ee140ac0d1e5e511e5435\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.563369 containerd[2508]: time="2025-12-16T03:24:03.563297744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4bb98b9-88qzw,Uid:453a3c95-d107-4f4e-b7f5-ee250655b168,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc428781c485b96bee3ebe7ad83dbd8234e6f7aa6c7f2c2b02237ef145b97a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.563857 kubelet[3995]: E1216 03:24:03.563673 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc428781c485b96bee3ebe7ad83dbd8234e6f7aa6c7f2c2b02237ef145b97a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.563857 kubelet[3995]: E1216 03:24:03.563723 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc428781c485b96bee3ebe7ad83dbd8234e6f7aa6c7f2c2b02237ef145b97a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" Dec 16 03:24:03.563857 kubelet[3995]: E1216 03:24:03.563746 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dc428781c485b96bee3ebe7ad83dbd8234e6f7aa6c7f2c2b02237ef145b97a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" Dec 16 03:24:03.563986 kubelet[3995]: E1216 03:24:03.563793 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69c4bb98b9-88qzw_calico-apiserver(453a3c95-d107-4f4e-b7f5-ee250655b168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69c4bb98b9-88qzw_calico-apiserver(453a3c95-d107-4f4e-b7f5-ee250655b168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dc428781c485b96bee3ebe7ad83dbd8234e6f7aa6c7f2c2b02237ef145b97a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:24:03.579543 containerd[2508]: time="2025-12-16T03:24:03.579206260Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-86474dbd54-65v57,Uid:928c764d-cf1a-4e24-874a-b4bd241b86e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"54070852c3a0c494701fc5d465d12d6ea3fd0b36db8ee140ac0d1e5e511e5435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.579646 kubelet[3995]: E1216 03:24:03.579381 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54070852c3a0c494701fc5d465d12d6ea3fd0b36db8ee140ac0d1e5e511e5435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.579646 kubelet[3995]: E1216 03:24:03.579438 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54070852c3a0c494701fc5d465d12d6ea3fd0b36db8ee140ac0d1e5e511e5435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" Dec 16 03:24:03.579646 kubelet[3995]: E1216 03:24:03.579456 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54070852c3a0c494701fc5d465d12d6ea3fd0b36db8ee140ac0d1e5e511e5435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" Dec 16 03:24:03.579740 kubelet[3995]: E1216 03:24:03.579504 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86474dbd54-65v57_calico-apiserver(928c764d-cf1a-4e24-874a-b4bd241b86e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86474dbd54-65v57_calico-apiserver(928c764d-cf1a-4e24-874a-b4bd241b86e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54070852c3a0c494701fc5d465d12d6ea3fd0b36db8ee140ac0d1e5e511e5435\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:24:03.586298 containerd[2508]: time="2025-12-16T03:24:03.586220271Z" level=error msg="Failed to destroy network for sandbox \"9bee0568d03e8514385ec98b67f32057d16daa64f0329d23122c4af34f23605a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.594367 containerd[2508]: time="2025-12-16T03:24:03.593555886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqmqs,Uid:c52e994d-d5bc-47b8-904c-c1132e917f17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9bee0568d03e8514385ec98b67f32057d16daa64f0329d23122c4af34f23605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.594469 kubelet[3995]: E1216 03:24:03.594194 3995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bee0568d03e8514385ec98b67f32057d16daa64f0329d23122c4af34f23605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.594469 kubelet[3995]: E1216 03:24:03.594251 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bee0568d03e8514385ec98b67f32057d16daa64f0329d23122c4af34f23605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fqmqs" Dec 16 03:24:03.594469 kubelet[3995]: E1216 03:24:03.594270 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bee0568d03e8514385ec98b67f32057d16daa64f0329d23122c4af34f23605a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fqmqs" Dec 16 03:24:03.594556 kubelet[3995]: E1216 03:24:03.594319 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fqmqs_kube-system(c52e994d-d5bc-47b8-904c-c1132e917f17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fqmqs_kube-system(c52e994d-d5bc-47b8-904c-c1132e917f17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bee0568d03e8514385ec98b67f32057d16daa64f0329d23122c4af34f23605a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fqmqs" podUID="c52e994d-d5bc-47b8-904c-c1132e917f17" Dec 16 03:24:03.609888 containerd[2508]: time="2025-12-16T03:24:03.609811848Z" level=error msg="Failed to destroy network for sandbox \"34a762811a7fe46facb555812bfbbf9e3aac35cfb29d8e36a0466ab4284c1dca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.614684 containerd[2508]: time="2025-12-16T03:24:03.614651576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86474dbd54-fphkv,Uid:b0a716ce-6354-47ff-896b-1da783a25f3a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a762811a7fe46facb555812bfbbf9e3aac35cfb29d8e36a0466ab4284c1dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.614862 kubelet[3995]: E1216 03:24:03.614803 3995 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a762811a7fe46facb555812bfbbf9e3aac35cfb29d8e36a0466ab4284c1dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:24:03.614862 kubelet[3995]: E1216 03:24:03.614850 3995 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a762811a7fe46facb555812bfbbf9e3aac35cfb29d8e36a0466ab4284c1dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" Dec 16 03:24:03.614946 kubelet[3995]: E1216 03:24:03.614868 3995 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a762811a7fe46facb555812bfbbf9e3aac35cfb29d8e36a0466ab4284c1dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" Dec 16 03:24:03.614946 kubelet[3995]: E1216 03:24:03.614920 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86474dbd54-fphkv_calico-apiserver(b0a716ce-6354-47ff-896b-1da783a25f3a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86474dbd54-fphkv_calico-apiserver(b0a716ce-6354-47ff-896b-1da783a25f3a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34a762811a7fe46facb555812bfbbf9e3aac35cfb29d8e36a0466ab4284c1dca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:24:04.271165 systemd[1]: run-netns-cni\x2d2b18194b\x2da72b\x2dcf51\x2db08e\x2d547370eef338.mount: Deactivated successfully. Dec 16 03:24:04.271420 systemd[1]: run-netns-cni\x2d7f7afaca\x2dc153\x2d865d\x2de493\x2d3a4e101b117d.mount: Deactivated successfully. Dec 16 03:24:07.187126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928912047.mount: Deactivated successfully. 
Dec 16 03:24:07.209994 containerd[2508]: time="2025-12-16T03:24:07.209947431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:24:07.212443 containerd[2508]: time="2025-12-16T03:24:07.212312634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025"
Dec 16 03:24:07.214767 containerd[2508]: time="2025-12-16T03:24:07.214739226Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:24:07.219434 containerd[2508]: time="2025-12-16T03:24:07.219395214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 03:24:07.219834 containerd[2508]: time="2025-12-16T03:24:07.219786206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.724723634s"
Dec 16 03:24:07.219926 containerd[2508]: time="2025-12-16T03:24:07.219911652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Dec 16 03:24:07.240156 containerd[2508]: time="2025-12-16T03:24:07.240113823Z" level=info msg="CreateContainer within sandbox \"48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 16 03:24:07.257879 containerd[2508]: time="2025-12-16T03:24:07.257279828Z" level=info msg="Container 4a669875fd83394a35d80534247d5b1e7e1949236774c639c1b5525e19298b76: CDI devices from CRI Config.CDIDevices: []"
Dec 16 03:24:07.261079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767039214.mount: Deactivated successfully.
Dec 16 03:24:07.274562 containerd[2508]: time="2025-12-16T03:24:07.274536087Z" level=info msg="CreateContainer within sandbox \"48e321c98100f2ed3c6f9e767e2f524dfc90664b6f34927e1c56a3dae0a508cc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a669875fd83394a35d80534247d5b1e7e1949236774c639c1b5525e19298b76\""
Dec 16 03:24:07.274993 containerd[2508]: time="2025-12-16T03:24:07.274970983Z" level=info msg="StartContainer for \"4a669875fd83394a35d80534247d5b1e7e1949236774c639c1b5525e19298b76\""
Dec 16 03:24:07.276118 containerd[2508]: time="2025-12-16T03:24:07.276088774Z" level=info msg="connecting to shim 4a669875fd83394a35d80534247d5b1e7e1949236774c639c1b5525e19298b76" address="unix:///run/containerd/s/8db59c9c5516f10af1d1a8f8d33db2c2403f0afee1edeac1589763d59787899f" protocol=ttrpc version=3
Dec 16 03:24:07.290317 systemd[1]: Started cri-containerd-4a669875fd83394a35d80534247d5b1e7e1949236774c639c1b5525e19298b76.scope - libcontainer container 4a669875fd83394a35d80534247d5b1e7e1949236774c639c1b5525e19298b76.
Dec 16 03:24:07.330000 audit: BPF prog-id=196 op=LOAD Dec 16 03:24:07.332695 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 16 03:24:07.332792 kernel: audit: type=1334 audit(1765855447.330:601): prog-id=196 op=LOAD Dec 16 03:24:07.330000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.340172 kernel: audit: type=1300 audit(1765855447.330:601): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.344548 kernel: audit: type=1327 audit(1765855447.330:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.331000 audit: BPF prog-id=197 op=LOAD Dec 16 03:24:07.348154 kernel: audit: type=1334 audit(1765855447.331:602): prog-id=197 op=LOAD Dec 16 03:24:07.331000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.359092 kernel: audit: type=1300 audit(1765855447.331:602): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.359195 kernel: audit: type=1327 audit(1765855447.331:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.331000 audit: BPF prog-id=197 op=UNLOAD Dec 16 03:24:07.365250 kernel: audit: type=1334 audit(1765855447.331:603): prog-id=197 op=UNLOAD Dec 16 03:24:07.365326 kernel: audit: type=1300 audit(1765855447.331:603): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.331000 audit[5044]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.370882 kernel: audit: type=1327 audit(1765855447.331:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.372532 kernel: audit: type=1334 audit(1765855447.331:604): prog-id=196 op=UNLOAD Dec 16 03:24:07.331000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:24:07.331000 audit[5044]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.331000 audit: BPF prog-id=198 op=LOAD Dec 16 03:24:07.331000 audit[5044]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4591 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:07.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461363639383735666438333339346133356438303533343234376435 Dec 16 03:24:07.388718 containerd[2508]: time="2025-12-16T03:24:07.388682540Z" level=info msg="StartContainer for \"4a669875fd83394a35d80534247d5b1e7e1949236774c639c1b5525e19298b76\" returns successfully" Dec 16 03:24:07.635667 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:24:07.635797 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
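The audit PROCTITLE fields above are hex-encoded command lines with NUL bytes separating the arguments. As a reading aid, here is a minimal Python sketch that decodes such a field (the helper name decode_proctitle is ours, not from the log); the short literal is just the leading bytes of the runc proctitle recorded above, and the same call works on the full values:

import binascii

def decode_proctitle(hex_value: str) -> str:
    # An audit PROCTITLE value is the process argv, hex-encoded, with NUL bytes between arguments.
    raw = binascii.unhexlify(hex_value)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# Leading bytes of the runc proctitle from the entries above (prefix only, for illustration):
print(decode_proctitle("72756E63002D2D726F6F74"))  # -> runc --root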
Dec 16 03:24:07.962840 kubelet[3995]: I1216 03:24:07.961805 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xh77j" podStartSLOduration=2.34635754 podStartE2EDuration="15.961768329s" podCreationTimestamp="2025-12-16 03:23:52 +0000 UTC" firstStartedPulling="2025-12-16 03:23:53.605157871 +0000 UTC m=+43.337500319" lastFinishedPulling="2025-12-16 03:24:07.22056866 +0000 UTC m=+56.952911108" observedRunningTime="2025-12-16 03:24:07.96124458 +0000 UTC m=+57.693587034" watchObservedRunningTime="2025-12-16 03:24:07.961768329 +0000 UTC m=+57.694110779" Dec 16 03:24:08.911064 kubelet[3995]: I1216 03:24:08.911029 3995 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-backend-key-pair\") pod \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\" (UID: \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\") " Dec 16 03:24:08.911064 kubelet[3995]: I1216 03:24:08.911069 3995 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-ca-bundle\") pod \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\" (UID: \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\") " Dec 16 03:24:08.911275 kubelet[3995]: I1216 03:24:08.911101 3995 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbsrx\" (UniqueName: \"kubernetes.io/projected/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-kube-api-access-xbsrx\") pod \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\" (UID: \"54b56cb1-52c9-41bc-a663-ff33a6ea04fc\") " Dec 16 03:24:08.915554 kubelet[3995]: I1216 03:24:08.915456 3995 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "54b56cb1-52c9-41bc-a663-ff33a6ea04fc" (UID: "54b56cb1-52c9-41bc-a663-ff33a6ea04fc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:24:08.915828 kubelet[3995]: I1216 03:24:08.915796 3995 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "54b56cb1-52c9-41bc-a663-ff33a6ea04fc" (UID: "54b56cb1-52c9-41bc-a663-ff33a6ea04fc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:24:08.915990 systemd[1]: var-lib-kubelet-pods-54b56cb1\x2d52c9\x2d41bc\x2da663\x2dff33a6ea04fc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 03:24:08.917654 kubelet[3995]: I1216 03:24:08.917617 3995 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-kube-api-access-xbsrx" (OuterVolumeSpecName: "kube-api-access-xbsrx") pod "54b56cb1-52c9-41bc-a663-ff33a6ea04fc" (UID: "54b56cb1-52c9-41bc-a663-ff33a6ea04fc"). InnerVolumeSpecName "kube-api-access-xbsrx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:24:08.919617 systemd[1]: var-lib-kubelet-pods-54b56cb1\x2d52c9\x2d41bc\x2da663\x2dff33a6ea04fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxbsrx.mount: Deactivated successfully. 
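The pod_startup_latency_tracker entry above reports podStartSLOduration=2.34635754 and podStartE2EDuration=15.961768329s; those figures are consistent with the end-to-end duration being watchObservedRunningTime minus podCreationTimestamp, and the SLO duration being the end-to-end time minus the image pull window (lastFinishedPulling minus firstStartedPulling). A small sketch of that arithmetic, with the log's timestamps truncated to microseconds for strptime (the variable names are ours):

from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created    = datetime.strptime("2025-12-16 03:23:52.000000 +0000", fmt)  # podCreationTimestamp
pull_start = datetime.strptime("2025-12-16 03:23:53.605157 +0000", fmt)  # firstStartedPulling
pull_end   = datetime.strptime("2025-12-16 03:24:07.220568 +0000", fmt)  # lastFinishedPulling
running    = datetime.strptime("2025-12-16 03:24:07.961768 +0000", fmt)  # watchObservedRunningTime

e2e = (running - created).total_seconds()             # ~15.961768 s, matches podStartE2EDuration
slo = e2e - (pull_end - pull_start).total_seconds()   # ~2.346357 s, matches podStartSLOduration
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")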
Dec 16 03:24:09.011834 kubelet[3995]: I1216 03:24:09.011803 3995 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-backend-key-pair\") on node \"ci-4547.0.0-a-dc3ed46bb5\" DevicePath \"\"" Dec 16 03:24:09.011834 kubelet[3995]: I1216 03:24:09.011828 3995 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-whisker-ca-bundle\") on node \"ci-4547.0.0-a-dc3ed46bb5\" DevicePath \"\"" Dec 16 03:24:09.011834 kubelet[3995]: I1216 03:24:09.011838 3995 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xbsrx\" (UniqueName: \"kubernetes.io/projected/54b56cb1-52c9-41bc-a663-ff33a6ea04fc-kube-api-access-xbsrx\") on node \"ci-4547.0.0-a-dc3ed46bb5\" DevicePath \"\"" Dec 16 03:24:09.517916 systemd[1]: Removed slice kubepods-besteffort-pod54b56cb1_52c9_41bc_a663_ff33a6ea04fc.slice - libcontainer container kubepods-besteffort-pod54b56cb1_52c9_41bc_a663_ff33a6ea04fc.slice. Dec 16 03:24:10.347449 kubelet[3995]: I1216 03:24:10.347414 3995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b56cb1-52c9-41bc-a663-ff33a6ea04fc" path="/var/lib/kubelet/pods/54b56cb1-52c9-41bc-a663-ff33a6ea04fc/volumes" Dec 16 03:24:10.471983 systemd[1]: Created slice kubepods-besteffort-pod08adb93e_a5f4_4e36_9d73_5c61441c3142.slice - libcontainer container kubepods-besteffort-pod08adb93e_a5f4_4e36_9d73_5c61441c3142.slice. Dec 16 03:24:10.521824 kubelet[3995]: I1216 03:24:10.521771 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08adb93e-a5f4-4e36-9d73-5c61441c3142-whisker-backend-key-pair\") pod \"whisker-6fd5b56957-fm9l2\" (UID: \"08adb93e-a5f4-4e36-9d73-5c61441c3142\") " pod="calico-system/whisker-6fd5b56957-fm9l2" Dec 16 03:24:10.521824 kubelet[3995]: I1216 03:24:10.521808 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgsj\" (UniqueName: \"kubernetes.io/projected/08adb93e-a5f4-4e36-9d73-5c61441c3142-kube-api-access-ldgsj\") pod \"whisker-6fd5b56957-fm9l2\" (UID: \"08adb93e-a5f4-4e36-9d73-5c61441c3142\") " pod="calico-system/whisker-6fd5b56957-fm9l2" Dec 16 03:24:10.521824 kubelet[3995]: I1216 03:24:10.521833 3995 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08adb93e-a5f4-4e36-9d73-5c61441c3142-whisker-ca-bundle\") pod \"whisker-6fd5b56957-fm9l2\" (UID: \"08adb93e-a5f4-4e36-9d73-5c61441c3142\") " pod="calico-system/whisker-6fd5b56957-fm9l2" Dec 16 03:24:11.376370 containerd[2508]: time="2025-12-16T03:24:11.376168262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd5b56957-fm9l2,Uid:08adb93e-a5f4-4e36-9d73-5c61441c3142,Namespace:calico-system,Attempt:0,}" Dec 16 03:24:11.930223 systemd-networkd[2145]: cali201349d9fba: Link UP Dec 16 03:24:11.931540 systemd-networkd[2145]: cali201349d9fba: Gained carrier Dec 16 03:24:12.027000 audit: BPF prog-id=199 op=LOAD Dec 16 03:24:12.027000 audit[5303]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc204f9c60 a2=98 a3=1fffffffffffffff items=0 ppid=5172 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.027000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:24:12.027000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:24:12.027000 audit[5303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc204f9c30 a3=0 items=0 ppid=5172 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.027000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:24:12.027000 audit: BPF prog-id=200 op=LOAD Dec 16 03:24:12.027000 audit[5303]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc204f9b40 a2=94 a3=3 items=0 ppid=5172 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.027000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:24:12.027000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:24:12.027000 audit[5303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc204f9b40 a2=94 a3=3 items=0 ppid=5172 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.027000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:24:12.027000 audit: BPF prog-id=201 op=LOAD Dec 16 03:24:12.027000 audit[5303]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc204f9b80 a2=94 a3=7ffc204f9d60 items=0 ppid=5172 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.027000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:24:12.027000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:24:12.027000 audit[5303]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc204f9b80 a2=94 a3=7ffc204f9d60 items=0 ppid=5172 pid=5303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.027000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:24:12.028000 audit: BPF prog-id=202 op=LOAD Dec 16 03:24:12.028000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf0d08d00 a2=98 a3=3 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.028000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.028000 audit: BPF prog-id=202 op=UNLOAD Dec 16 03:24:12.028000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdf0d08cd0 a3=0 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.028000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.029000 audit: BPF prog-id=203 op=LOAD Dec 16 03:24:12.029000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf0d08af0 a2=94 a3=54428f items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.029000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.029000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:24:12.029000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf0d08af0 a2=94 a3=54428f items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.029000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.029000 audit: BPF prog-id=204 op=LOAD Dec 16 03:24:12.029000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf0d08b20 a2=94 a3=2 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.029000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.029000 audit: BPF prog-id=204 op=UNLOAD Dec 16 03:24:12.029000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf0d08b20 a2=0 a3=2 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.029000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.151000 audit: BPF prog-id=205 op=LOAD Dec 16 03:24:12.151000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdf0d089e0 a2=94 a3=1 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:24:12.151000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.152000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:24:12.152000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffdf0d089e0 a2=94 a3=1 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.152000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.161000 audit: BPF prog-id=206 op=LOAD Dec 16 03:24:12.161000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf0d089d0 a2=94 a3=4 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.161000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:24:12.161000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdf0d089d0 a2=0 a3=4 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.161000 audit: BPF prog-id=207 op=LOAD Dec 16 03:24:12.161000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf0d08830 a2=94 a3=5 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.161000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:24:12.161000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdf0d08830 a2=0 a3=5 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.161000 audit: BPF prog-id=208 op=LOAD Dec 16 03:24:12.161000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf0d08a50 a2=94 a3=6 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.161000 audit: BPF prog-id=208 op=UNLOAD Dec 16 03:24:12.161000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffdf0d08a50 a2=0 a3=6 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.161000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.162000 audit: BPF prog-id=209 op=LOAD Dec 16 03:24:12.162000 audit[5304]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffdf0d08200 a2=94 a3=88 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.162000 audit: BPF prog-id=210 op=LOAD Dec 16 03:24:12.162000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffdf0d08080 a2=94 a3=2 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.162000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:24:12.162000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffdf0d080b0 a2=0 a3=7ffdf0d081b0 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.162000 audit: BPF prog-id=209 op=UNLOAD Dec 16 03:24:12.162000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=fd54d10 a2=0 a3=d1bf4a4f73329750 items=0 ppid=5172 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.162000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:24:12.173115 containerd[2508]: 2025-12-16 03:24:11.426 [INFO][5224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 03:24:12.173115 containerd[2508]: 2025-12-16 03:24:11.667 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0 whisker-6fd5b56957- calico-system 08adb93e-a5f4-4e36-9d73-5c61441c3142 983 0 2025-12-16 03:24:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fd5b56957 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 whisker-6fd5b56957-fm9l2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali201349d9fba [] [] }} ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-" Dec 16 03:24:12.173115 containerd[2508]: 2025-12-16 03:24:11.667 [INFO][5224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" Dec 16 03:24:12.173115 containerd[2508]: 2025-12-16 03:24:11.712 [INFO][5263] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" 
HandleID="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.713 [INFO][5263] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" HandleID="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"whisker-6fd5b56957-fm9l2", "timestamp":"2025-12-16 03:24:11.71285994 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.714 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.714 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.714 [INFO][5263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.722 [INFO][5263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.726 [INFO][5263] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.730 [INFO][5263] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.732 [INFO][5263] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173658 containerd[2508]: 2025-12-16 03:24:11.734 [INFO][5263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173895 containerd[2508]: 2025-12-16 03:24:11.734 [INFO][5263] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173895 containerd[2508]: 2025-12-16 03:24:11.736 [INFO][5263] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca Dec 16 03:24:12.173895 containerd[2508]: 2025-12-16 03:24:11.741 [INFO][5263] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173895 containerd[2508]: 2025-12-16 03:24:11.899 [INFO][5263] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.129/26] block=192.168.98.128/26 handle="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173895 containerd[2508]: 2025-12-16 03:24:11.899 
[INFO][5263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.129/26] handle="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:12.173895 containerd[2508]: 2025-12-16 03:24:11.899 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:24:12.173895 containerd[2508]: 2025-12-16 03:24:11.899 [INFO][5263] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.129/26] IPv6=[] ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" HandleID="k8s-pod-network.d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" Dec 16 03:24:12.174630 containerd[2508]: 2025-12-16 03:24:11.903 [INFO][5224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0", GenerateName:"whisker-6fd5b56957-", Namespace:"calico-system", SelfLink:"", UID:"08adb93e-a5f4-4e36-9d73-5c61441c3142", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 24, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd5b56957", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"whisker-6fd5b56957-fm9l2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali201349d9fba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:12.174630 containerd[2508]: 2025-12-16 03:24:11.904 [INFO][5224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.129/32] ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" Dec 16 03:24:12.174732 containerd[2508]: 2025-12-16 03:24:11.904 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali201349d9fba ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" Dec 16 03:24:12.174732 containerd[2508]: 2025-12-16 03:24:11.932 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" 
WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" Dec 16 03:24:12.174779 containerd[2508]: 2025-12-16 03:24:11.938 [INFO][5224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0", GenerateName:"whisker-6fd5b56957-", Namespace:"calico-system", SelfLink:"", UID:"08adb93e-a5f4-4e36-9d73-5c61441c3142", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 24, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fd5b56957", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca", Pod:"whisker-6fd5b56957-fm9l2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali201349d9fba", MAC:"de:50:b2:e8:d5:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:12.174836 containerd[2508]: 2025-12-16 03:24:12.169 [INFO][5224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" Namespace="calico-system" Pod="whisker-6fd5b56957-fm9l2" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-whisker--6fd5b56957--fm9l2-eth0" Dec 16 03:24:12.183000 audit: BPF prog-id=211 op=LOAD Dec 16 03:24:12.183000 audit[5311]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca8932c80 a2=98 a3=1999999999999999 items=0 ppid=5172 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.183000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:24:12.184000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:24:12.184000 audit[5311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffca8932c50 a3=0 items=0 ppid=5172 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.184000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:24:12.184000 audit: BPF prog-id=212 op=LOAD Dec 16 03:24:12.184000 audit[5311]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca8932b60 a2=94 a3=ffff items=0 ppid=5172 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.184000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:24:12.184000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:24:12.184000 audit[5311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffca8932b60 a2=94 a3=ffff items=0 ppid=5172 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.184000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:24:12.184000 audit: BPF prog-id=213 op=LOAD Dec 16 03:24:12.184000 audit[5311]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffca8932ba0 a2=94 a3=7ffca8932d80 items=0 ppid=5172 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.184000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:24:12.184000 audit: BPF prog-id=213 op=UNLOAD Dec 16 03:24:12.184000 audit[5311]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffca8932ba0 a2=94 a3=7ffca8932d80 items=0 ppid=5172 pid=5311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.184000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:24:12.216544 containerd[2508]: time="2025-12-16T03:24:12.216498205Z" level=info msg="connecting to shim d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca" address="unix:///run/containerd/s/1426ad1cc7210228e6605edda6512a6e6cb9e45343963ed700cb5feba3bb09a2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:12.241582 systemd[1]: Started cri-containerd-d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca.scope - libcontainer container 
d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca. Dec 16 03:24:12.254000 audit: BPF prog-id=214 op=LOAD Dec 16 03:24:12.254000 audit: BPF prog-id=215 op=LOAD Dec 16 03:24:12.254000 audit[5342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5330 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333362653930376535313361616637643962303835613661386535 Dec 16 03:24:12.254000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:24:12.254000 audit[5342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5330 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333362653930376535313361616637643962303835613661386535 Dec 16 03:24:12.254000 audit: BPF prog-id=216 op=LOAD Dec 16 03:24:12.254000 audit[5342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5330 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333362653930376535313361616637643962303835613661386535 Dec 16 03:24:12.254000 audit: BPF prog-id=217 op=LOAD Dec 16 03:24:12.254000 audit[5342]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5330 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333362653930376535313361616637643962303835613661386535 Dec 16 03:24:12.254000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:24:12.254000 audit[5342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5330 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333362653930376535313361616637643962303835613661386535 Dec 16 03:24:12.254000 audit: BPF 
prog-id=216 op=UNLOAD Dec 16 03:24:12.254000 audit[5342]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5330 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333362653930376535313361616637643962303835613661386535 Dec 16 03:24:12.254000 audit: BPF prog-id=218 op=LOAD Dec 16 03:24:12.254000 audit[5342]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5330 pid=5342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438333362653930376535313361616637643962303835613661386535 Dec 16 03:24:12.310220 containerd[2508]: time="2025-12-16T03:24:12.310186618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd5b56957-fm9l2,Uid:08adb93e-a5f4-4e36-9d73-5c61441c3142,Namespace:calico-system,Attempt:0,} returns sandbox id \"d833be907e513aaf7d9b085a6a8e5b54e361a3db2dcc1498d13a84506a2848ca\"" Dec 16 03:24:12.313044 containerd[2508]: time="2025-12-16T03:24:12.313014756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:24:12.313300 systemd-networkd[2145]: vxlan.calico: Link UP Dec 16 03:24:12.313304 systemd-networkd[2145]: vxlan.calico: Gained carrier Dec 16 03:24:12.337804 kernel: kauditd_printk_skb: 117 callbacks suppressed Dec 16 03:24:12.337904 kernel: audit: type=1334 audit(1765855452.334:644): prog-id=219 op=LOAD Dec 16 03:24:12.334000 audit: BPF prog-id=219 op=LOAD Dec 16 03:24:12.347517 kernel: audit: type=1300 audit(1765855452.334:644): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe2ead090 a2=98 a3=0 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe2ead090 a2=98 a3=0 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.354016 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Dec 16 03:24:12.354081 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Dec 16 03:24:12.354100 kernel: audit: backlog limit exceeded Dec 16 03:24:12.363042 kernel: audit: type=1327 audit(1765855452.334:644): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.372321 kernel: audit: type=1334 audit(1765855452.334:645): prog-id=219 op=UNLOAD Dec 16 03:24:12.372378 kernel: audit: type=1300 audit(1765855452.334:645): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffe2ead060 a3=0 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit: BPF prog-id=219 op=UNLOAD Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffe2ead060 a3=0 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.377709 kernel: audit: type=1327 audit(1765855452.334:645): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: BPF prog-id=220 op=LOAD Dec 16 03:24:12.378775 kernel: audit: type=1334 audit(1765855452.334:646): prog-id=220 op=LOAD Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe2eacea0 a2=94 a3=54428f items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: BPF prog-id=220 op=UNLOAD Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe2eacea0 a2=94 a3=54428f items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: BPF prog-id=221 op=LOAD Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffe2eaced0 a2=94 a3=2 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: BPF prog-id=221 op=UNLOAD Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffe2eaced0 a2=0 a3=2 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: BPF prog-id=222 op=LOAD Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffe2eacc80 a2=94 a3=4 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.334000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:24:12.334000 audit[5380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffe2eacc80 a2=94 a3=4 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.334000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.335000 audit: BPF prog-id=223 op=LOAD Dec 16 03:24:12.335000 audit[5380]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffe2eacd80 a2=94 a3=7fffe2eacf00 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.335000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.335000 audit: BPF prog-id=223 op=UNLOAD Dec 16 03:24:12.335000 audit[5380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffe2eacd80 a2=0 a3=7fffe2eacf00 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.335000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.335000 audit: BPF prog-id=224 op=LOAD Dec 16 03:24:12.335000 audit[5380]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffe2eac4b0 a2=94 a3=2 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.335000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.335000 audit: BPF prog-id=224 op=UNLOAD Dec 16 03:24:12.335000 audit[5380]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffe2eac4b0 a2=0 a3=2 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.335000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.335000 audit: BPF prog-id=225 op=LOAD Dec 16 03:24:12.335000 audit[5380]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffe2eac5b0 a2=94 a3=30 items=0 ppid=5172 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.335000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:24:12.349000 audit: BPF prog-id=226 op=LOAD Dec 16 03:24:12.349000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea6dd67a0 a2=98 a3=0 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.349000 audit: BPF prog-id=226 op=UNLOAD Dec 16 03:24:12.349000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffea6dd6770 a3=0 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.349000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.350000 audit: BPF prog-id=227 op=LOAD Dec 16 03:24:12.350000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea6dd6590 a2=94 a3=54428f items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.350000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.350000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:24:12.350000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffea6dd6590 a2=94 a3=54428f items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.363000 audit: BPF prog-id=228 op=UNLOAD Dec 16 03:24:12.363000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffea6dd65c0 a2=0 a3=2 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.363000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.508000 audit: BPF prog-id=229 op=LOAD Dec 16 03:24:12.508000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea6dd6480 a2=94 a3=1 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.508000 audit: BPF prog-id=229 op=UNLOAD Dec 16 03:24:12.508000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffea6dd6480 a2=94 a3=1 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.508000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.518000 audit: BPF prog-id=230 op=LOAD Dec 16 03:24:12.518000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffea6dd6470 a2=94 a3=4 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.518000 audit: BPF prog-id=230 op=UNLOAD Dec 16 03:24:12.518000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffea6dd6470 a2=0 a3=4 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.518000 audit: BPF prog-id=231 op=LOAD Dec 16 03:24:12.518000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffea6dd62d0 a2=94 a3=5 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.518000 audit: BPF prog-id=231 op=UNLOAD Dec 16 03:24:12.518000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffea6dd62d0 a2=0 a3=5 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.518000 audit: BPF prog-id=232 op=LOAD Dec 16 03:24:12.518000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffea6dd64f0 a2=94 a3=6 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.518000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:24:12.518000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffea6dd64f0 a2=0 a3=6 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.518000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.519000 audit: BPF prog-id=233 op=LOAD Dec 16 03:24:12.519000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffea6dd5ca0 a2=94 a3=88 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.519000 audit: BPF prog-id=234 op=LOAD Dec 16 03:24:12.519000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 
a1=7ffea6dd5b20 a2=94 a3=2 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.519000 audit: BPF prog-id=234 op=UNLOAD Dec 16 03:24:12.519000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffea6dd5b50 a2=0 a3=7ffea6dd5c50 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.519000 audit: BPF prog-id=233 op=UNLOAD Dec 16 03:24:12.519000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=27195d10 a2=0 a3=57e812a7308f7658 items=0 ppid=5172 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.519000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:24:12.525000 audit: BPF prog-id=225 op=UNLOAD Dec 16 03:24:12.525000 audit[5172]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000e93140 a2=0 a3=0 items=0 ppid=5163 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.525000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:24:12.568293 containerd[2508]: time="2025-12-16T03:24:12.568251478Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:12.586405 containerd[2508]: time="2025-12-16T03:24:12.586251386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:24:12.586405 containerd[2508]: time="2025-12-16T03:24:12.586375091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:12.587974 kubelet[3995]: E1216 03:24:12.586680 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:24:12.587974 kubelet[3995]: E1216 03:24:12.586737 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:24:12.588316 kubelet[3995]: E1216 03:24:12.586899 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e99e60b66e264e9fbdb6300d985b5bad,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:12.589089 containerd[2508]: time="2025-12-16T03:24:12.589060348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:24:12.615000 audit[5412]: NETFILTER_CFG table=nat:126 family=2 entries=15 op=nft_register_chain pid=5412 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:12.615000 audit[5412]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdefc20a20 a2=0 a3=7ffdefc20a0c items=0 ppid=5172 pid=5412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.615000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:12.617000 audit[5413]: NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=5413 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:12.617000 audit[5413]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe3dad4280 a2=0 a3=7ffe3dad426c items=0 ppid=5172 pid=5413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.617000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:12.639000 audit[5411]: NETFILTER_CFG table=raw:128 family=2 entries=21 op=nft_register_chain pid=5411 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:12.639000 audit[5411]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdc4b76340 a2=0 a3=7ffdc4b7632c items=0 ppid=5172 pid=5411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.639000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:12.644000 audit[5415]: NETFILTER_CFG table=filter:129 family=2 entries=94 op=nft_register_chain pid=5415 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:12.644000 audit[5415]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffeff32f470 a2=0 a3=55f554a87000 items=0 ppid=5172 pid=5415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:12.644000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:12.840967 containerd[2508]: time="2025-12-16T03:24:12.840894193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:12.843505 containerd[2508]: time="2025-12-16T03:24:12.843470790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:24:12.843610 containerd[2508]: time="2025-12-16T03:24:12.843488731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:12.843712 kubelet[3995]: E1216 03:24:12.843681 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:24:12.843786 kubelet[3995]: E1216 03:24:12.843727 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:24:12.843913 kubelet[3995]: E1216 03:24:12.843866 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:12.845352 kubelet[3995]: E1216 03:24:12.845306 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:24:13.369344 systemd-networkd[2145]: vxlan.calico: Gained IPv6LL Dec 16 03:24:13.524442 kubelet[3995]: E1216 03:24:13.524397 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:24:13.561345 systemd-networkd[2145]: cali201349d9fba: Gained IPv6LL Dec 16 03:24:13.861000 audit[5425]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:13.861000 audit[5425]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc54819970 a2=0 a3=7ffc5481995c items=0 ppid=4100 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:13.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:13.865000 audit[5425]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:13.865000 audit[5425]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc54819970 a2=0 a3=0 items=0 ppid=4100 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:13.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:14.347010 containerd[2508]: time="2025-12-16T03:24:14.346944660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srg9b,Uid:52f35797-5a94-4b5f-8ac7-147ca2758736,Namespace:calico-system,Attempt:0,}" Dec 16 03:24:14.647195 systemd-networkd[2145]: cali4dbc4288518: Link UP Dec 16 03:24:14.647414 systemd-networkd[2145]: cali4dbc4288518: Gained carrier Dec 16 03:24:14.813421 containerd[2508]: 2025-12-16 03:24:14.565 [INFO][5427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0 csi-node-driver- calico-system 52f35797-5a94-4b5f-8ac7-147ca2758736 815 0 2025-12-16 03:23:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 csi-node-driver-srg9b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4dbc4288518 [] [] }} ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" Pod="csi-node-driver-srg9b" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-" Dec 16 03:24:14.813421 containerd[2508]: 2025-12-16 03:24:14.565 [INFO][5427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" Pod="csi-node-driver-srg9b" 
WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" Dec 16 03:24:14.813421 containerd[2508]: 2025-12-16 03:24:14.606 [INFO][5438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" HandleID="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.606 [INFO][5438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" HandleID="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"csi-node-driver-srg9b", "timestamp":"2025-12-16 03:24:14.606244184 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.606 [INFO][5438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.606 [INFO][5438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.606 [INFO][5438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.612 [INFO][5438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.617 [INFO][5438] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.621 [INFO][5438] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.624 [INFO][5438] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813691 containerd[2508]: 2025-12-16 03:24:14.626 [INFO][5438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813910 containerd[2508]: 2025-12-16 03:24:14.626 [INFO][5438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813910 containerd[2508]: 2025-12-16 03:24:14.628 [INFO][5438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d Dec 16 03:24:14.813910 containerd[2508]: 2025-12-16 03:24:14.632 [INFO][5438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813910 containerd[2508]: 2025-12-16 
03:24:14.641 [INFO][5438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.130/26] block=192.168.98.128/26 handle="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813910 containerd[2508]: 2025-12-16 03:24:14.641 [INFO][5438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.130/26] handle="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:14.813910 containerd[2508]: 2025-12-16 03:24:14.641 [INFO][5438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:24:14.813910 containerd[2508]: 2025-12-16 03:24:14.641 [INFO][5438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.130/26] IPv6=[] ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" HandleID="k8s-pod-network.29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" Dec 16 03:24:14.814541 containerd[2508]: 2025-12-16 03:24:14.643 [INFO][5427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" Pod="csi-node-driver-srg9b" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52f35797-5a94-4b5f-8ac7-147ca2758736", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"csi-node-driver-srg9b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4dbc4288518", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:14.814636 containerd[2508]: 2025-12-16 03:24:14.644 [INFO][5427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.130/32] ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" Pod="csi-node-driver-srg9b" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" Dec 16 03:24:14.814636 containerd[2508]: 2025-12-16 03:24:14.644 [INFO][5427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dbc4288518 ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" 
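The IPAM trace above shows this node holding an affinity for block 192.168.98.128/26 and assigning 192.168.98.130 from it (the coredns pod further down receives 192.168.98.131 from the same block). A small stdlib-only check of those numbers, purely illustrative and not Calico code:

```python
# Sanity-check the Calico IPAM numbers from the log: the node's affine
# block is 192.168.98.128/26 and the pods above received .130 and .131.
import ipaddress

block = ipaddress.ip_network("192.168.98.128/26")
assigned = [ipaddress.ip_address(a) for a in ("192.168.98.130", "192.168.98.131")]

# A /26 block holds 64 addresses, so one affine block serves up to 64 pod IPs.
print(block.num_addresses)               # 64
print([ip in block for ip in assigned])  # [True, True]
print(block.broadcast_address)           # 192.168.98.191, the last address in the block
```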
Pod="csi-node-driver-srg9b" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" Dec 16 03:24:14.814636 containerd[2508]: 2025-12-16 03:24:14.648 [INFO][5427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" Pod="csi-node-driver-srg9b" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" Dec 16 03:24:14.814701 containerd[2508]: 2025-12-16 03:24:14.648 [INFO][5427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" Pod="csi-node-driver-srg9b" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52f35797-5a94-4b5f-8ac7-147ca2758736", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d", Pod:"csi-node-driver-srg9b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4dbc4288518", MAC:"ae:d8:19:cb:9d:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:14.814766 containerd[2508]: 2025-12-16 03:24:14.810 [INFO][5427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" Namespace="calico-system" Pod="csi-node-driver-srg9b" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-csi--node--driver--srg9b-eth0" Dec 16 03:24:14.831000 audit[5452]: NETFILTER_CFG table=filter:132 family=2 entries=36 op=nft_register_chain pid=5452 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:14.831000 audit[5452]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffe8ce13d40 a2=0 a3=7ffe8ce13d2c items=0 ppid=5172 pid=5452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.831000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:14.859499 
containerd[2508]: time="2025-12-16T03:24:14.859462255Z" level=info msg="connecting to shim 29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d" address="unix:///run/containerd/s/826aebab97a3f73c2fe13ae633eb795b62cef47fd2509904db2c68cff84cd635" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:14.884322 systemd[1]: Started cri-containerd-29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d.scope - libcontainer container 29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d. Dec 16 03:24:14.891000 audit: BPF prog-id=235 op=LOAD Dec 16 03:24:14.891000 audit: BPF prog-id=236 op=LOAD Dec 16 03:24:14.891000 audit[5472]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5461 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663839386239336662633164623638656461373562633431303737 Dec 16 03:24:14.891000 audit: BPF prog-id=236 op=UNLOAD Dec 16 03:24:14.891000 audit[5472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5461 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663839386239336662633164623638656461373562633431303737 Dec 16 03:24:14.891000 audit: BPF prog-id=237 op=LOAD Dec 16 03:24:14.891000 audit[5472]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5461 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663839386239336662633164623638656461373562633431303737 Dec 16 03:24:14.891000 audit: BPF prog-id=238 op=LOAD Dec 16 03:24:14.891000 audit[5472]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5461 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663839386239336662633164623638656461373562633431303737 Dec 16 03:24:14.891000 audit: BPF prog-id=238 op=UNLOAD Dec 16 03:24:14.891000 audit[5472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5461 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663839386239336662633164623638656461373562633431303737 Dec 16 03:24:14.891000 audit: BPF prog-id=237 op=UNLOAD Dec 16 03:24:14.891000 audit[5472]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5461 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663839386239336662633164623638656461373562633431303737 Dec 16 03:24:14.891000 audit: BPF prog-id=239 op=LOAD Dec 16 03:24:14.891000 audit[5472]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5461 pid=5472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239663839386239336662633164623638656461373562633431303737 Dec 16 03:24:14.908104 containerd[2508]: time="2025-12-16T03:24:14.908010956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-srg9b,Uid:52f35797-5a94-4b5f-8ac7-147ca2758736,Namespace:calico-system,Attempt:0,} returns sandbox id \"29f898b93fbc1db68eda75bc410779360920c8708ed1a3d6c0d589dc0956d82d\"" Dec 16 03:24:14.912187 containerd[2508]: time="2025-12-16T03:24:14.911699583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:24:15.159105 containerd[2508]: time="2025-12-16T03:24:15.158957602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:15.161388 containerd[2508]: time="2025-12-16T03:24:15.161337274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:24:15.161496 containerd[2508]: time="2025-12-16T03:24:15.161339149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:15.161695 kubelet[3995]: E1216 03:24:15.161644 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:24:15.162037 kubelet[3995]: E1216 03:24:15.161708 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:24:15.162037 kubelet[3995]: E1216 03:24:15.161851 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:15.163946 containerd[2508]: time="2025-12-16T03:24:15.163914056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:24:15.346715 containerd[2508]: time="2025-12-16T03:24:15.346615456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c7d9d48b-hc6qj,Uid:e0164474-95e7-4b01-988d-4ae10762d8d3,Namespace:calico-system,Attempt:0,}" Dec 16 03:24:15.346715 containerd[2508]: time="2025-12-16T03:24:15.346662327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-284xb,Uid:17fccc4a-a08c-4495-a01b-bad3cd3eab43,Namespace:calico-system,Attempt:0,}" Dec 16 03:24:15.346952 containerd[2508]: time="2025-12-16T03:24:15.346928738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqmqs,Uid:c52e994d-d5bc-47b8-904c-c1132e917f17,Namespace:kube-system,Attempt:0,}" Dec 16 03:24:15.347043 containerd[2508]: time="2025-12-16T03:24:15.346615464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4bb98b9-88qzw,Uid:453a3c95-d107-4f4e-b7f5-ee250655b168,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:24:15.401925 containerd[2508]: time="2025-12-16T03:24:15.401874328Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Dec 16 03:24:15.405195 containerd[2508]: time="2025-12-16T03:24:15.405163083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:15.405405 containerd[2508]: time="2025-12-16T03:24:15.405247168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:24:15.405814 kubelet[3995]: E1216 03:24:15.405685 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:24:15.405814 kubelet[3995]: E1216 03:24:15.405739 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:24:15.405926 kubelet[3995]: E1216 03:24:15.405876 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:15.407048 kubelet[3995]: E1216 03:24:15.406972 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:24:15.534071 kubelet[3995]: E1216 03:24:15.533023 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:24:15.763796 systemd-networkd[2145]: cali360e7631cf9: Link UP Dec 16 03:24:15.764002 systemd-networkd[2145]: cali360e7631cf9: Gained carrier Dec 16 03:24:15.929267 systemd-networkd[2145]: cali4dbc4288518: Gained IPv6LL Dec 16 03:24:15.952999 containerd[2508]: 2025-12-16 03:24:15.507 [INFO][5497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0 coredns-674b8bbfcf- kube-system c52e994d-d5bc-47b8-904c-c1132e917f17 923 0 2025-12-16 03:23:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 coredns-674b8bbfcf-fqmqs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali360e7631cf9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-" Dec 16 03:24:15.952999 containerd[2508]: 2025-12-16 03:24:15.507 [INFO][5497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" Dec 16 03:24:15.952999 containerd[2508]: 2025-12-16 03:24:15.564 [INFO][5559] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" HandleID="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.564 [INFO][5559] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" HandleID="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3300), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"coredns-674b8bbfcf-fqmqs", "timestamp":"2025-12-16 03:24:15.564255219 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.564 [INFO][5559] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.564 [INFO][5559] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.564 [INFO][5559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.723 [INFO][5559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.730 [INFO][5559] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.733 [INFO][5559] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.734 [INFO][5559] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953245 containerd[2508]: 2025-12-16 03:24:15.736 [INFO][5559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953471 containerd[2508]: 2025-12-16 03:24:15.736 [INFO][5559] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953471 containerd[2508]: 2025-12-16 03:24:15.737 [INFO][5559] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946 Dec 16 03:24:15.953471 containerd[2508]: 2025-12-16 03:24:15.745 [INFO][5559] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953471 containerd[2508]: 2025-12-16 03:24:15.757 [INFO][5559] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.131/26] block=192.168.98.128/26 
handle="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953471 containerd[2508]: 2025-12-16 03:24:15.757 [INFO][5559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.131/26] handle="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:15.953471 containerd[2508]: 2025-12-16 03:24:15.757 [INFO][5559] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:24:15.953471 containerd[2508]: 2025-12-16 03:24:15.757 [INFO][5559] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.131/26] IPv6=[] ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" HandleID="k8s-pod-network.bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" Dec 16 03:24:15.953630 containerd[2508]: 2025-12-16 03:24:15.760 [INFO][5497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c52e994d-d5bc-47b8-904c-c1132e917f17", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"coredns-674b8bbfcf-fqmqs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali360e7631cf9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:15.953630 containerd[2508]: 2025-12-16 03:24:15.760 [INFO][5497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.131/32] ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" Dec 16 03:24:15.953630 containerd[2508]: 2025-12-16 03:24:15.760 [INFO][5497] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali360e7631cf9 ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" Dec 16 03:24:15.953630 containerd[2508]: 2025-12-16 03:24:15.762 [INFO][5497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" Dec 16 03:24:15.953630 containerd[2508]: 2025-12-16 03:24:15.763 [INFO][5497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c52e994d-d5bc-47b8-904c-c1132e917f17", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946", Pod:"coredns-674b8bbfcf-fqmqs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali360e7631cf9", MAC:"a2:87:53:4b:2d:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:15.953630 containerd[2508]: 2025-12-16 03:24:15.949 [INFO][5497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" Namespace="kube-system" Pod="coredns-674b8bbfcf-fqmqs" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--fqmqs-eth0" Dec 16 03:24:15.978000 audit[5583]: NETFILTER_CFG table=filter:133 family=2 entries=46 op=nft_register_chain pid=5583 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:15.978000 audit[5583]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7ffda6155850 a2=0 a3=7ffda615583c 
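For readers decoding the SYSCALL records by hand: arch=c000003e is AUDIT_ARCH_X86_64, so the syscall numbers in this section resolve through the kernel's x86_64 syscall table. A tiny lookup sketch covering only the numbers that actually appear here; the mapping comes from that table, not from these logs.

```python
# Map the x86_64 syscall numbers seen in the audit records above to names.
# Only the numbers that appear in this section are listed.
X86_64_SYSCALLS = {
    3: "close",
    46: "sendmsg",    # the netlink send behind the NETFILTER_CFG entries
    263: "unlinkat",  # seen in the calico-node record above
    321: "bpf",       # bpftool and runc loading/unloading BPF programs
}

def syscall_name(number: int) -> str:
    return X86_64_SYSCALLS.get(number, f"unknown({number})")

if __name__ == "__main__":
    for n in (321, 3, 46, 263):
        print(n, syscall_name(n))
```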
items=0 ppid=5172 pid=5583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:15.978000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:15.998534 containerd[2508]: time="2025-12-16T03:24:15.998497537Z" level=info msg="connecting to shim bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946" address="unix:///run/containerd/s/78ce216e8a91e4160ba39b4659b04f90d239be47eef82737492f0be10120a188" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:16.023329 systemd[1]: Started cri-containerd-bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946.scope - libcontainer container bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946. Dec 16 03:24:16.031000 audit: BPF prog-id=240 op=LOAD Dec 16 03:24:16.031000 audit: BPF prog-id=241 op=LOAD Dec 16 03:24:16.031000 audit[5603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5592 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266303935636630643732633762613435313832666265383461643265 Dec 16 03:24:16.031000 audit: BPF prog-id=241 op=UNLOAD Dec 16 03:24:16.031000 audit[5603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5592 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266303935636630643732633762613435313832666265383461643265 Dec 16 03:24:16.031000 audit: BPF prog-id=242 op=LOAD Dec 16 03:24:16.031000 audit[5603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5592 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266303935636630643732633762613435313832666265383461643265 Dec 16 03:24:16.031000 audit: BPF prog-id=243 op=LOAD Dec 16 03:24:16.031000 audit[5603]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5592 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.031000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266303935636630643732633762613435313832666265383461643265 Dec 16 03:24:16.031000 audit: BPF prog-id=243 op=UNLOAD Dec 16 03:24:16.031000 audit[5603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5592 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266303935636630643732633762613435313832666265383461643265 Dec 16 03:24:16.031000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:24:16.031000 audit[5603]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5592 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266303935636630643732633762613435313832666265383461643265 Dec 16 03:24:16.031000 audit: BPF prog-id=244 op=LOAD Dec 16 03:24:16.031000 audit[5603]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5592 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266303935636630643732633762613435313832666265383461643265 Dec 16 03:24:16.063121 containerd[2508]: time="2025-12-16T03:24:16.063091064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fqmqs,Uid:c52e994d-d5bc-47b8-904c-c1132e917f17,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946\"" Dec 16 03:24:16.071664 containerd[2508]: time="2025-12-16T03:24:16.071641522Z" level=info msg="CreateContainer within sandbox \"bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:24:16.090532 containerd[2508]: time="2025-12-16T03:24:16.090505423Z" level=info msg="Container 1348d86091f0b633ede3e0b86f4fc75a08719b24f3e1af3f2a7bedb95df0632d: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:24:16.101811 containerd[2508]: time="2025-12-16T03:24:16.101786145Z" level=info msg="CreateContainer within sandbox \"bf095cf0d72c7ba45182fbe84ad2e10518090cc8f41692695f6843ed2b673946\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1348d86091f0b633ede3e0b86f4fc75a08719b24f3e1af3f2a7bedb95df0632d\"" Dec 16 03:24:16.102190 containerd[2508]: time="2025-12-16T03:24:16.102169699Z" level=info msg="StartContainer for 
\"1348d86091f0b633ede3e0b86f4fc75a08719b24f3e1af3f2a7bedb95df0632d\"" Dec 16 03:24:16.102861 containerd[2508]: time="2025-12-16T03:24:16.102829810Z" level=info msg="connecting to shim 1348d86091f0b633ede3e0b86f4fc75a08719b24f3e1af3f2a7bedb95df0632d" address="unix:///run/containerd/s/78ce216e8a91e4160ba39b4659b04f90d239be47eef82737492f0be10120a188" protocol=ttrpc version=3 Dec 16 03:24:16.123322 systemd[1]: Started cri-containerd-1348d86091f0b633ede3e0b86f4fc75a08719b24f3e1af3f2a7bedb95df0632d.scope - libcontainer container 1348d86091f0b633ede3e0b86f4fc75a08719b24f3e1af3f2a7bedb95df0632d. Dec 16 03:24:16.130000 audit: BPF prog-id=245 op=LOAD Dec 16 03:24:16.131000 audit: BPF prog-id=246 op=LOAD Dec 16 03:24:16.131000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5592 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343864383630393166306236333365646533653062383666346663 Dec 16 03:24:16.131000 audit: BPF prog-id=246 op=UNLOAD Dec 16 03:24:16.131000 audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5592 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343864383630393166306236333365646533653062383666346663 Dec 16 03:24:16.131000 audit: BPF prog-id=247 op=LOAD Dec 16 03:24:16.131000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5592 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343864383630393166306236333365646533653062383666346663 Dec 16 03:24:16.131000 audit: BPF prog-id=248 op=LOAD Dec 16 03:24:16.131000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5592 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343864383630393166306236333365646533653062383666346663 Dec 16 03:24:16.131000 audit: BPF prog-id=248 op=UNLOAD Dec 16 03:24:16.131000 audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 
ppid=5592 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343864383630393166306236333365646533653062383666346663 Dec 16 03:24:16.131000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:24:16.131000 audit[5629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5592 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343864383630393166306236333365646533653062383666346663 Dec 16 03:24:16.131000 audit: BPF prog-id=249 op=LOAD Dec 16 03:24:16.131000 audit[5629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5592 pid=5629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133343864383630393166306236333365646533653062383666346663 Dec 16 03:24:16.150019 containerd[2508]: time="2025-12-16T03:24:16.149973049Z" level=info msg="StartContainer for \"1348d86091f0b633ede3e0b86f4fc75a08719b24f3e1af3f2a7bedb95df0632d\" returns successfully" Dec 16 03:24:16.167466 systemd-networkd[2145]: califaf6be1a7d5: Link UP Dec 16 03:24:16.167691 systemd-networkd[2145]: califaf6be1a7d5: Gained carrier Dec 16 03:24:16.346824 containerd[2508]: time="2025-12-16T03:24:16.346672410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86474dbd54-65v57,Uid:928c764d-cf1a-4e24-874a-b4bd241b86e5,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:24:16.347043 containerd[2508]: time="2025-12-16T03:24:16.346672423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvbkk,Uid:b18239fa-27f6-46f2-8f55-8e660ec10a40,Namespace:kube-system,Attempt:0,}" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.466 [INFO][5509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0 calico-apiserver-69c4bb98b9- calico-apiserver 453a3c95-d107-4f4e-b7f5-ee250655b168 931 0 2025-12-16 03:23:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69c4bb98b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 calico-apiserver-69c4bb98b9-88qzw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califaf6be1a7d5 [] [] }} 
ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.466 [INFO][5509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.567 [INFO][5549] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" HandleID="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.567 [INFO][5549] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" HandleID="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"calico-apiserver-69c4bb98b9-88qzw", "timestamp":"2025-12-16 03:24:15.567323036 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.567 [INFO][5549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.757 [INFO][5549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.757 [INFO][5549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.947 [INFO][5549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.957 [INFO][5549] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.964 [INFO][5549] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.967 [INFO][5549] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.969 [INFO][5549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.969 [INFO][5549] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:15.971 [INFO][5549] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073 Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:16.012 [INFO][5549] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:16.157 [INFO][5549] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.132/26] block=192.168.98.128/26 handle="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:16.157 [INFO][5549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.132/26] handle="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:16.157 [INFO][5549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
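Note: the ipam/ipam.go records above show Calico claiming per-pod addresses out of the host's affine block 192.168.98.128/26 (192.168.98.131 for coredns-674b8bbfcf-fqmqs, 192.168.98.132 for calico-apiserver-69c4bb98b9-88qzw). The sketch below only checks that block arithmetic with the Go standard library; it is not Calico's IPAM code.

// Minimal check of the block arithmetic in the ipam/ipam.go lines above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.98.128/26") // block with affinity to this host
	claimed := []netip.Addr{
		netip.MustParseAddr("192.168.98.131"), // coredns-674b8bbfcf-fqmqs
		netip.MustParseAddr("192.168.98.132"), // calico-apiserver-69c4bb98b9-88qzw
	}

	for _, ip := range claimed {
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
	// A /26 covers 2^(32-26) = 64 addresses, handed out per host once the
	// block affinity is confirmed.
	fmt.Printf("block capacity: %d addresses\n", 1<<(32-block.Bits()))
}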
Dec 16 03:24:16.380743 containerd[2508]: 2025-12-16 03:24:16.157 [INFO][5549] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.132/26] IPv6=[] ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" HandleID="k8s-pod-network.b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" Dec 16 03:24:16.382449 containerd[2508]: 2025-12-16 03:24:16.160 [INFO][5509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0", GenerateName:"calico-apiserver-69c4bb98b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"453a3c95-d107-4f4e-b7f5-ee250655b168", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4bb98b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"calico-apiserver-69c4bb98b9-88qzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califaf6be1a7d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:16.382449 containerd[2508]: 2025-12-16 03:24:16.163 [INFO][5509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.132/32] ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" Dec 16 03:24:16.382449 containerd[2508]: 2025-12-16 03:24:16.163 [INFO][5509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califaf6be1a7d5 ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" Dec 16 03:24:16.382449 containerd[2508]: 2025-12-16 03:24:16.166 [INFO][5509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" Dec 16 03:24:16.382449 containerd[2508]: 2025-12-16 03:24:16.166 
[INFO][5509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0", GenerateName:"calico-apiserver-69c4bb98b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"453a3c95-d107-4f4e-b7f5-ee250655b168", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69c4bb98b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073", Pod:"calico-apiserver-69c4bb98b9-88qzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califaf6be1a7d5", MAC:"46:44:f4:1d:b3:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:16.382449 containerd[2508]: 2025-12-16 03:24:16.370 [INFO][5509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" Namespace="calico-apiserver" Pod="calico-apiserver-69c4bb98b9-88qzw" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--69c4bb98b9--88qzw-eth0" Dec 16 03:24:16.396000 audit[5691]: NETFILTER_CFG table=filter:134 family=2 entries=58 op=nft_register_chain pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:16.396000 audit[5691]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffd4b311010 a2=0 a3=7ffd4b310ffc items=0 ppid=5172 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.396000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:16.437488 containerd[2508]: time="2025-12-16T03:24:16.437453699Z" level=info msg="connecting to shim b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073" address="unix:///run/containerd/s/ac41a8575e59ca5bbd9f9479d943420bf9a821787bd1297b09803f1c1eacd689" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:16.467312 systemd[1]: Started cri-containerd-b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073.scope - libcontainer container 
b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073. Dec 16 03:24:16.475000 audit: BPF prog-id=250 op=LOAD Dec 16 03:24:16.475000 audit: BPF prog-id=251 op=LOAD Dec 16 03:24:16.475000 audit[5716]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5705 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303335363962323236616362616565653663343561363763643462 Dec 16 03:24:16.475000 audit: BPF prog-id=251 op=UNLOAD Dec 16 03:24:16.475000 audit[5716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5705 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303335363962323236616362616565653663343561363763643462 Dec 16 03:24:16.476000 audit: BPF prog-id=252 op=LOAD Dec 16 03:24:16.476000 audit[5716]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5705 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303335363962323236616362616565653663343561363763643462 Dec 16 03:24:16.476000 audit: BPF prog-id=253 op=LOAD Dec 16 03:24:16.476000 audit[5716]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5705 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303335363962323236616362616565653663343561363763643462 Dec 16 03:24:16.476000 audit: BPF prog-id=253 op=UNLOAD Dec 16 03:24:16.476000 audit[5716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5705 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303335363962323236616362616565653663343561363763643462 Dec 16 03:24:16.476000 audit: BPF 
prog-id=252 op=UNLOAD Dec 16 03:24:16.476000 audit[5716]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5705 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303335363962323236616362616565653663343561363763643462 Dec 16 03:24:16.476000 audit: BPF prog-id=254 op=LOAD Dec 16 03:24:16.476000 audit[5716]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5705 pid=5716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:16.476000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233303335363962323236616362616565653663343561363763643462 Dec 16 03:24:16.509293 containerd[2508]: time="2025-12-16T03:24:16.509262525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69c4bb98b9-88qzw,Uid:453a3c95-d107-4f4e-b7f5-ee250655b168,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b303569b226acbaeee6c45a67cd4b338a70d3f48eb7168f7d9861c7ffe1c7073\"" Dec 16 03:24:16.511358 containerd[2508]: time="2025-12-16T03:24:16.511337723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:16.533126 kubelet[3995]: E1216 03:24:16.532862 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:24:16.751432 containerd[2508]: time="2025-12-16T03:24:16.751296124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:16.754438 containerd[2508]: time="2025-12-16T03:24:16.754383296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:24:16.754554 containerd[2508]: time="2025-12-16T03:24:16.754401289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:16.754697 kubelet[3995]: E1216 03:24:16.754653 3995 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:16.754772 kubelet[3995]: E1216 03:24:16.754703 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:16.754891 kubelet[3995]: E1216 03:24:16.754858 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kctb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69c4bb98b9-88qzw_calico-apiserver(453a3c95-d107-4f4e-b7f5-ee250655b168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:16.756162 kubelet[3995]: E1216 03:24:16.756117 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" 
podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:24:17.160000 audit[5758]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=5758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:17.160000 audit[5758]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcad83f50 a2=0 a3=7fffcad83f3c items=0 ppid=4100 pid=5758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:17.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:17.166000 audit[5758]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=5758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:17.166000 audit[5758]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffcad83f50 a2=0 a3=0 items=0 ppid=4100 pid=5758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:17.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:17.273294 systemd-networkd[2145]: cali360e7631cf9: Gained IPv6LL Dec 16 03:24:17.309807 kubelet[3995]: I1216 03:24:17.308192 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fqmqs" podStartSLOduration=61.30817108 podStartE2EDuration="1m1.30817108s" podCreationTimestamp="2025-12-16 03:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:24:16.851708837 +0000 UTC m=+66.584051286" watchObservedRunningTime="2025-12-16 03:24:17.30817108 +0000 UTC m=+67.040513529" Dec 16 03:24:17.345657 containerd[2508]: time="2025-12-16T03:24:17.345612834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86474dbd54-fphkv,Uid:b0a716ce-6354-47ff-896b-1da783a25f3a,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:24:17.534649 kubelet[3995]: E1216 03:24:17.534491 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:24:17.572000 audit[5771]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=5771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:17.575296 kernel: kauditd_printk_skb: 207 callbacks suppressed Dec 16 03:24:17.575384 kernel: audit: type=1325 audit(1765855457.572:718): table=filter:137 family=2 entries=17 op=nft_register_rule pid=5771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:17.572000 audit[5771]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf57390a0 a2=0 a3=7ffcf573908c items=0 ppid=4100 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:17.583947 kernel: audit: type=1300 audit(1765855457.572:718): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf57390a0 a2=0 a3=7ffcf573908c items=0 ppid=4100 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:17.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:17.588064 kernel: audit: type=1327 audit(1765855457.572:718): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:17.583000 audit[5771]: NETFILTER_CFG table=nat:138 family=2 entries=35 op=nft_register_chain pid=5771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:17.592713 kernel: audit: type=1325 audit(1765855457.583:719): table=nat:138 family=2 entries=35 op=nft_register_chain pid=5771 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:17.600750 kernel: audit: type=1300 audit(1765855457.583:719): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcf57390a0 a2=0 a3=7ffcf573908c items=0 ppid=4100 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:17.583000 audit[5771]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcf57390a0 a2=0 a3=7ffcf573908c items=0 ppid=4100 pid=5771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:17.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:17.604946 kernel: audit: type=1327 audit(1765855457.583:719): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:17.926363 systemd-networkd[2145]: cali75561621f8a: Link UP Dec 16 03:24:17.927574 systemd-networkd[2145]: cali75561621f8a: Gained carrier Dec 16 03:24:17.977257 systemd-networkd[2145]: califaf6be1a7d5: Gained IPv6LL Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:15.508 [INFO][5521] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0 goldmane-666569f655- calico-system 17fccc4a-a08c-4495-a01b-bad3cd3eab43 903 0 2025-12-16 03:23:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 goldmane-666569f655-284xb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali75561621f8a [] [] }} ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-" Dec 16 03:24:18.175113 
containerd[2508]: 2025-12-16 03:24:15.509 [INFO][5521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:15.576 [INFO][5557] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" HandleID="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:15.576 [INFO][5557] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" HandleID="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"goldmane-666569f655-284xb", "timestamp":"2025-12-16 03:24:15.576095728 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:15.576 [INFO][5557] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:16.157 [INFO][5557] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
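Note: three sandbox setups ([5549], [5557], [5559]) request addresses from the same 192.168.98.128/26 block within a few milliseconds, and the "host-wide IPAM lock" messages show them being serialized: the goldmane request logged at 03:24:15.576 only acquires the lock at 03:24:16.157, after the earlier assignments have released it. The sketch below illustrates that serialization idea with an in-memory mutex; Calico's real lock is datastore-backed, and blockAllocator/assign are illustrative names, not Calico API.

// Serialize concurrent address assignments from one /26 behind a single lock,
// mirroring the acquire/assign/release sequence in the log.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type blockAllocator struct {
	mu    sync.Mutex
	next  netip.Addr
	block netip.Prefix
}

func (a *blockAllocator) assign(pod string) netip.Addr {
	a.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock()
	ip := a.next
	if !a.block.Contains(ip) {
		panic("block exhausted")
	}
	a.next = ip.Next()
	fmt.Printf("assigned %s to %s\n", ip, pod)
	return ip // lock released on return: "Released host-wide IPAM lock."
}

func main() {
	alloc := &blockAllocator{
		block: netip.MustParsePrefix("192.168.98.128/26"),
		next:  netip.MustParseAddr("192.168.98.131"),
	}

	// Which pod ends up with which address depends on lock acquisition order,
	// which is exactly what the log's acquire/release messages record.
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-674b8bbfcf-fqmqs", "calico-apiserver-69c4bb98b9-88qzw", "goldmane-666569f655-284xb"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); alloc.assign(p) }(pod)
	}
	wg.Wait()
}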
Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:16.157 [INFO][5557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:16.374 [INFO][5557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:16.459 [INFO][5557] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:16.647 [INFO][5557] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:16.853 [INFO][5557] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.156 [INFO][5557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.156 [INFO][5557] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.308 [INFO][5557] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821 Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.564 [INFO][5557] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.917 [INFO][5557] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.133/26] block=192.168.98.128/26 handle="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.917 [INFO][5557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.133/26] handle="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.917 [INFO][5557] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
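Note: the pod_startup_latency_tracker record at 03:24:17.308 reports podStartSLOduration=61.30817108s for coredns-674b8bbfcf-fqmqs; with both pull timestamps zeroed, that value lines up exactly with watchObservedRunningTime minus the pod's creation timestamp. The sketch below only re-checks that arithmetic with timestamps copied from the log; it is not kubelet code, and the layout string is just Go's default time format.

// Re-check the reported podStartSLOduration from the two log timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-12-16 03:23:16 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-16 03:24:17.30817108 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(observed.Sub(created)) // 1m1.30817108s, i.e. 61.30817108s
}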
Dec 16 03:24:18.175113 containerd[2508]: 2025-12-16 03:24:17.917 [INFO][5557] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.133/26] IPv6=[] ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" HandleID="k8s-pod-network.1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" Dec 16 03:24:18.177692 containerd[2508]: 2025-12-16 03:24:17.921 [INFO][5521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"17fccc4a-a08c-4495-a01b-bad3cd3eab43", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"goldmane-666569f655-284xb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75561621f8a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:18.177692 containerd[2508]: 2025-12-16 03:24:17.921 [INFO][5521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.133/32] ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" Dec 16 03:24:18.177692 containerd[2508]: 2025-12-16 03:24:17.921 [INFO][5521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75561621f8a ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" Dec 16 03:24:18.177692 containerd[2508]: 2025-12-16 03:24:17.930 [INFO][5521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" Dec 16 03:24:18.177692 containerd[2508]: 2025-12-16 03:24:17.931 [INFO][5521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" 
Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"17fccc4a-a08c-4495-a01b-bad3cd3eab43", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821", Pod:"goldmane-666569f655-284xb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75561621f8a", MAC:"9e:6a:33:b9:51:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:18.177692 containerd[2508]: 2025-12-16 03:24:18.170 [INFO][5521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" Namespace="calico-system" Pod="goldmane-666569f655-284xb" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-goldmane--666569f655--284xb-eth0" Dec 16 03:24:18.190000 audit[5787]: NETFILTER_CFG table=filter:139 family=2 entries=56 op=nft_register_chain pid=5787 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:18.190000 audit[5787]: SYSCALL arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffc411c26a0 a2=0 a3=7ffc411c268c items=0 ppid=5172 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.197986 kernel: audit: type=1325 audit(1765855458.190:720): table=filter:139 family=2 entries=56 op=nft_register_chain pid=5787 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:18.198036 kernel: audit: type=1300 audit(1765855458.190:720): arch=c000003e syscall=46 success=yes exit=28744 a0=3 a1=7ffc411c26a0 a2=0 a3=7ffc411c268c items=0 ppid=5172 pid=5787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.190000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:18.202095 kernel: audit: type=1327 audit(1765855458.190:720): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:18.228035 containerd[2508]: time="2025-12-16T03:24:18.227981389Z" level=info msg="connecting to shim 1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821" address="unix:///run/containerd/s/bd0f55a10b65c09306b2370a2361722dd572ef002734858a2b9fdf22d098e129" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:18.261328 systemd[1]: Started cri-containerd-1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821.scope - libcontainer container 1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821. Dec 16 03:24:18.272000 audit: BPF prog-id=255 op=LOAD Dec 16 03:24:18.273000 audit: BPF prog-id=256 op=LOAD Dec 16 03:24:18.273000 audit[5812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5801 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.275150 kernel: audit: type=1334 audit(1765855458.272:721): prog-id=255 op=LOAD Dec 16 03:24:18.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646331623965643662326432613831383063336235633661313939 Dec 16 03:24:18.273000 audit: BPF prog-id=256 op=UNLOAD Dec 16 03:24:18.273000 audit[5812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646331623965643662326432613831383063336235633661313939 Dec 16 03:24:18.273000 audit: BPF prog-id=257 op=LOAD Dec 16 03:24:18.273000 audit[5812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5801 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646331623965643662326432613831383063336235633661313939 Dec 16 03:24:18.273000 audit: BPF prog-id=258 op=LOAD Dec 16 03:24:18.273000 audit[5812]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5801 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.273000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646331623965643662326432613831383063336235633661313939 Dec 16 03:24:18.273000 audit: BPF prog-id=258 op=UNLOAD Dec 16 03:24:18.273000 audit[5812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646331623965643662326432613831383063336235633661313939 Dec 16 03:24:18.273000 audit: BPF prog-id=257 op=UNLOAD Dec 16 03:24:18.273000 audit[5812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5801 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646331623965643662326432613831383063336235633661313939 Dec 16 03:24:18.273000 audit: BPF prog-id=259 op=LOAD Dec 16 03:24:18.273000 audit[5812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5801 pid=5812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162646331623965643662326432613831383063336235633661313939 Dec 16 03:24:18.306815 containerd[2508]: time="2025-12-16T03:24:18.306782249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-284xb,Uid:17fccc4a-a08c-4495-a01b-bad3cd3eab43,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bdc1b9ed6b2d2a8180c3b5c6a1997409d5d0829b53f76ae2cd676ca0fc8e821\"" Dec 16 03:24:18.308219 containerd[2508]: time="2025-12-16T03:24:18.308183395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:24:18.420000 audit[5841]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:18.420000 audit[5841]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee5dbfae0 a2=0 a3=7ffee5dbfacc items=0 ppid=4100 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:18.425000 audit[5841]: NETFILTER_CFG table=nat:141 family=2 entries=20 
op=nft_register_rule pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:18.425000 audit[5841]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee5dbfae0 a2=0 a3=7ffee5dbfacc items=0 ppid=4100 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:18.567905 containerd[2508]: time="2025-12-16T03:24:18.567789745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:18.570295 containerd[2508]: time="2025-12-16T03:24:18.570263150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:24:18.570407 containerd[2508]: time="2025-12-16T03:24:18.570350135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:18.570534 kubelet[3995]: E1216 03:24:18.570486 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:24:18.570899 kubelet[3995]: E1216 03:24:18.570543 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:24:18.570899 kubelet[3995]: E1216 03:24:18.570690 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27fds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-284xb_calico-system(17fccc4a-a08c-4495-a01b-bad3cd3eab43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:18.571880 kubelet[3995]: E1216 03:24:18.571842 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:24:18.697623 systemd-networkd[2145]: cali0b0315b3d18: Link UP Dec 16 03:24:18.698576 systemd-networkd[2145]: cali0b0315b3d18: Gained carrier Dec 
16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:15.468 [INFO][5513] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0 calico-kube-controllers-85c7d9d48b- calico-system e0164474-95e7-4b01-988d-4ae10762d8d3 930 0 2025-12-16 03:23:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85c7d9d48b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 calico-kube-controllers-85c7d9d48b-hc6qj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0b0315b3d18 [] [] }} ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:15.468 [INFO][5513] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:15.578 [INFO][5548] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" HandleID="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:15.578 [INFO][5548] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" HandleID="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"calico-kube-controllers-85c7d9d48b-hc6qj", "timestamp":"2025-12-16 03:24:15.578089984 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:15.578 [INFO][5548] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:17.918 [INFO][5548] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:17.918 [INFO][5548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.173 [INFO][5548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.507 [INFO][5548] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.666 [INFO][5548] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.670 [INFO][5548] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.673 [INFO][5548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.673 [INFO][5548] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.675 [INFO][5548] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.679 [INFO][5548] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.688 [INFO][5548] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.134/26] block=192.168.98.128/26 handle="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.689 [INFO][5548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.134/26] handle="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.689 [INFO][5548] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:24:18.769393 containerd[2508]: 2025-12-16 03:24:18.689 [INFO][5548] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.134/26] IPv6=[] ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" HandleID="k8s-pod-network.76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" Dec 16 03:24:18.769985 containerd[2508]: 2025-12-16 03:24:18.691 [INFO][5513] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0", GenerateName:"calico-kube-controllers-85c7d9d48b-", Namespace:"calico-system", SelfLink:"", UID:"e0164474-95e7-4b01-988d-4ae10762d8d3", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c7d9d48b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"calico-kube-controllers-85c7d9d48b-hc6qj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0315b3d18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:18.769985 containerd[2508]: 2025-12-16 03:24:18.692 [INFO][5513] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.134/32] ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" Dec 16 03:24:18.769985 containerd[2508]: 2025-12-16 03:24:18.692 [INFO][5513] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b0315b3d18 ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" Dec 16 03:24:18.769985 containerd[2508]: 2025-12-16 03:24:18.698 [INFO][5513] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" 
WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" Dec 16 03:24:18.769985 containerd[2508]: 2025-12-16 03:24:18.700 [INFO][5513] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0", GenerateName:"calico-kube-controllers-85c7d9d48b-", Namespace:"calico-system", SelfLink:"", UID:"e0164474-95e7-4b01-988d-4ae10762d8d3", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c7d9d48b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb", Pod:"calico-kube-controllers-85c7d9d48b-hc6qj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b0315b3d18", MAC:"fa:5e:a7:e6:65:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:18.769985 containerd[2508]: 2025-12-16 03:24:18.765 [INFO][5513] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" Namespace="calico-system" Pod="calico-kube-controllers-85c7d9d48b-hc6qj" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--kube--controllers--85c7d9d48b--hc6qj-eth0" Dec 16 03:24:18.791000 audit[5850]: NETFILTER_CFG table=filter:142 family=2 entries=52 op=nft_register_chain pid=5850 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:18.791000 audit[5850]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffc069fec40 a2=0 a3=7ffc069fec2c items=0 ppid=5172 pid=5850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.791000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:18.811842 containerd[2508]: time="2025-12-16T03:24:18.811807689Z" level=info msg="connecting to shim 76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb" address="unix:///run/containerd/s/5adabab264555e1ebb22063dc14cf4ffa6faba7d44e54fe498c98b32c910e600" 
namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:18.835327 systemd[1]: Started cri-containerd-76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb.scope - libcontainer container 76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb. Dec 16 03:24:18.842000 audit: BPF prog-id=260 op=LOAD Dec 16 03:24:18.842000 audit: BPF prog-id=261 op=LOAD Dec 16 03:24:18.842000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5859 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736383230636538383631653739653634333834313566613533396261 Dec 16 03:24:18.842000 audit: BPF prog-id=261 op=UNLOAD Dec 16 03:24:18.842000 audit[5872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5859 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736383230636538383631653739653634333834313566613533396261 Dec 16 03:24:18.842000 audit: BPF prog-id=262 op=LOAD Dec 16 03:24:18.842000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5859 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736383230636538383631653739653634333834313566613533396261 Dec 16 03:24:18.842000 audit: BPF prog-id=263 op=LOAD Dec 16 03:24:18.842000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5859 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736383230636538383631653739653634333834313566613533396261 Dec 16 03:24:18.842000 audit: BPF prog-id=263 op=UNLOAD Dec 16 03:24:18.842000 audit[5872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5859 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.842000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736383230636538383631653739653634333834313566613533396261 Dec 16 03:24:18.842000 audit: BPF prog-id=262 op=UNLOAD Dec 16 03:24:18.842000 audit[5872]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5859 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736383230636538383631653739653634333834313566613533396261 Dec 16 03:24:18.843000 audit: BPF prog-id=264 op=LOAD Dec 16 03:24:18.843000 audit[5872]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5859 pid=5872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736383230636538383631653739653634333834313566613533396261 Dec 16 03:24:18.877161 containerd[2508]: time="2025-12-16T03:24:18.877110625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c7d9d48b-hc6qj,Uid:e0164474-95e7-4b01-988d-4ae10762d8d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"76820ce8861e79e6438415fa539ba267993ef0340b42ab0b44b92e5ac8ce1efb\"" Dec 16 03:24:18.878549 containerd[2508]: time="2025-12-16T03:24:18.878351592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:24:18.927653 systemd-networkd[2145]: cali7450390396e: Link UP Dec 16 03:24:18.929247 systemd-networkd[2145]: cali7450390396e: Gained carrier Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:16.599 [INFO][5668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0 coredns-674b8bbfcf- kube-system b18239fa-27f6-46f2-8f55-8e660ec10a40 920 0 2025-12-16 03:23:17 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 coredns-674b8bbfcf-dvbkk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7450390396e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:16.599 [INFO][5668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" 
WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:16.878 [INFO][5746] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" HandleID="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:16.878 [INFO][5746] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" HandleID="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"coredns-674b8bbfcf-dvbkk", "timestamp":"2025-12-16 03:24:16.878768073 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:16.879 [INFO][5746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.689 [INFO][5746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.689 [INFO][5746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.705 [INFO][5746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.711 [INFO][5746] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.766 [INFO][5746] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.772 [INFO][5746] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.775 [INFO][5746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.776 [INFO][5746] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.778 [INFO][5746] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159 Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.903 [INFO][5746] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 
2025-12-16 03:24:18.916 [INFO][5746] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.135/26] block=192.168.98.128/26 handle="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.916 [INFO][5746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.135/26] handle="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.917 [INFO][5746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:24:18.947613 containerd[2508]: 2025-12-16 03:24:18.917 [INFO][5746] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.135/26] IPv6=[] ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" HandleID="k8s-pod-network.0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" Dec 16 03:24:18.949198 containerd[2508]: 2025-12-16 03:24:18.923 [INFO][5668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b18239fa-27f6-46f2-8f55-8e660ec10a40", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"coredns-674b8bbfcf-dvbkk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7450390396e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:18.949198 containerd[2508]: 2025-12-16 03:24:18.923 [INFO][5668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.135/32] ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" 
WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" Dec 16 03:24:18.949198 containerd[2508]: 2025-12-16 03:24:18.923 [INFO][5668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7450390396e ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" Dec 16 03:24:18.949198 containerd[2508]: 2025-12-16 03:24:18.926 [INFO][5668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" Dec 16 03:24:18.949198 containerd[2508]: 2025-12-16 03:24:18.926 [INFO][5668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b18239fa-27f6-46f2-8f55-8e660ec10a40", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159", Pod:"coredns-674b8bbfcf-dvbkk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7450390396e", MAC:"ae:72:48:de:58:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:18.949198 containerd[2508]: 2025-12-16 03:24:18.944 [INFO][5668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvbkk" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-coredns--674b8bbfcf--dvbkk-eth0" Dec 16 03:24:18.972000 audit[5907]: NETFILTER_CFG table=filter:143 family=2 entries=58 op=nft_register_chain pid=5907 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:18.972000 audit[5907]: SYSCALL arch=c000003e syscall=46 success=yes exit=26760 a0=3 a1=7ffd48e96fe0 a2=0 a3=7ffd48e96fcc items=0 ppid=5172 pid=5907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:18.972000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:18.990763 containerd[2508]: time="2025-12-16T03:24:18.990680630Z" level=info msg="connecting to shim 0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159" address="unix:///run/containerd/s/1e0c1679780672d6b86709199299f666d802fa2bfa3293b188438a47f2d8df85" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:19.010359 systemd[1]: Started cri-containerd-0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159.scope - libcontainer container 0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159. Dec 16 03:24:19.018000 audit: BPF prog-id=265 op=LOAD Dec 16 03:24:19.018000 audit: BPF prog-id=266 op=LOAD Dec 16 03:24:19.018000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343961643563373765343031616536366330313761363038613161 Dec 16 03:24:19.018000 audit: BPF prog-id=266 op=UNLOAD Dec 16 03:24:19.018000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343961643563373765343031616536366330313761363038613161 Dec 16 03:24:19.018000 audit: BPF prog-id=267 op=LOAD Dec 16 03:24:19.018000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343961643563373765343031616536366330313761363038613161 Dec 16 03:24:19.018000 audit: BPF prog-id=268 op=LOAD Dec 16 03:24:19.018000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343961643563373765343031616536366330313761363038613161 Dec 16 03:24:19.018000 audit: BPF prog-id=268 op=UNLOAD Dec 16 03:24:19.018000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343961643563373765343031616536366330313761363038613161 Dec 16 03:24:19.018000 audit: BPF prog-id=267 op=UNLOAD Dec 16 03:24:19.018000 audit[5927]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343961643563373765343031616536366330313761363038613161 Dec 16 03:24:19.018000 audit: BPF prog-id=269 op=LOAD Dec 16 03:24:19.018000 audit[5927]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5915 pid=5927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.018000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033343961643563373765343031616536366330313761363038613161 Dec 16 03:24:19.054864 containerd[2508]: time="2025-12-16T03:24:19.054835553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvbkk,Uid:b18239fa-27f6-46f2-8f55-8e660ec10a40,Namespace:kube-system,Attempt:0,} returns sandbox id \"0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159\"" Dec 16 03:24:19.061848 containerd[2508]: time="2025-12-16T03:24:19.061825244Z" level=info msg="CreateContainer within sandbox \"0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:24:19.077599 containerd[2508]: time="2025-12-16T03:24:19.077569857Z" level=info msg="Container 8380bf0d9da461e4931df60d40ecc3b6222e6c0cf90cde9b6eaba13a140a7701: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:24:19.086979 containerd[2508]: time="2025-12-16T03:24:19.086909184Z" level=info msg="CreateContainer within sandbox \"0349ad5c77e401ae66c017a608a1aed169baf4ac28e27c077e0ac2e7a0191159\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8380bf0d9da461e4931df60d40ecc3b6222e6c0cf90cde9b6eaba13a140a7701\"" Dec 16 
03:24:19.088752 containerd[2508]: time="2025-12-16T03:24:19.088728326Z" level=info msg="StartContainer for \"8380bf0d9da461e4931df60d40ecc3b6222e6c0cf90cde9b6eaba13a140a7701\"" Dec 16 03:24:19.089481 containerd[2508]: time="2025-12-16T03:24:19.089447131Z" level=info msg="connecting to shim 8380bf0d9da461e4931df60d40ecc3b6222e6c0cf90cde9b6eaba13a140a7701" address="unix:///run/containerd/s/1e0c1679780672d6b86709199299f666d802fa2bfa3293b188438a47f2d8df85" protocol=ttrpc version=3 Dec 16 03:24:19.106930 systemd[1]: Started cri-containerd-8380bf0d9da461e4931df60d40ecc3b6222e6c0cf90cde9b6eaba13a140a7701.scope - libcontainer container 8380bf0d9da461e4931df60d40ecc3b6222e6c0cf90cde9b6eaba13a140a7701. Dec 16 03:24:19.116000 audit: BPF prog-id=270 op=LOAD Dec 16 03:24:19.116000 audit: BPF prog-id=271 op=LOAD Dec 16 03:24:19.116000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5915 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383062663064396461343631653439333164663630643430656363 Dec 16 03:24:19.116000 audit: BPF prog-id=271 op=UNLOAD Dec 16 03:24:19.116000 audit[5952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383062663064396461343631653439333164663630643430656363 Dec 16 03:24:19.116000 audit: BPF prog-id=272 op=LOAD Dec 16 03:24:19.116000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5915 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383062663064396461343631653439333164663630643430656363 Dec 16 03:24:19.116000 audit: BPF prog-id=273 op=LOAD Dec 16 03:24:19.116000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5915 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383062663064396461343631653439333164663630643430656363 Dec 16 03:24:19.116000 audit: BPF prog-id=273 op=UNLOAD Dec 16 
03:24:19.116000 audit[5952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383062663064396461343631653439333164663630643430656363 Dec 16 03:24:19.116000 audit: BPF prog-id=272 op=UNLOAD Dec 16 03:24:19.116000 audit[5952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5915 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383062663064396461343631653439333164663630643430656363 Dec 16 03:24:19.116000 audit: BPF prog-id=274 op=LOAD Dec 16 03:24:19.116000 audit[5952]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5915 pid=5952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833383062663064396461343631653439333164663630643430656363 Dec 16 03:24:19.119221 containerd[2508]: time="2025-12-16T03:24:19.118848566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:19.121357 containerd[2508]: time="2025-12-16T03:24:19.121322193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:19.121493 containerd[2508]: time="2025-12-16T03:24:19.121325562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:24:19.121942 kubelet[3995]: E1216 03:24:19.121840 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:24:19.121942 kubelet[3995]: E1216 03:24:19.121881 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:24:19.122590 kubelet[3995]: E1216 03:24:19.122519 3995 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85c7d9d48b-hc6qj_calico-system(e0164474-95e7-4b01-988d-4ae10762d8d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:19.123842 kubelet[3995]: E1216 03:24:19.123797 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:24:19.134254 containerd[2508]: time="2025-12-16T03:24:19.134171118Z" level=info msg="StartContainer for \"8380bf0d9da461e4931df60d40ecc3b6222e6c0cf90cde9b6eaba13a140a7701\" returns 
successfully" Dec 16 03:24:19.167376 systemd-networkd[2145]: calif723ec48ef3: Link UP Dec 16 03:24:19.169063 systemd-networkd[2145]: calif723ec48ef3: Gained carrier Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:16.597 [INFO][5665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0 calico-apiserver-86474dbd54- calico-apiserver 928c764d-cf1a-4e24-874a-b4bd241b86e5 915 0 2025-12-16 03:23:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86474dbd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 calico-apiserver-86474dbd54-65v57 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif723ec48ef3 [] [] }} ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:16.601 [INFO][5665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:16.881 [INFO][5744] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" HandleID="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:16.881 [INFO][5744] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" HandleID="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"calico-apiserver-86474dbd54-65v57", "timestamp":"2025-12-16 03:24:16.881474959 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:16.881 [INFO][5744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.917 [INFO][5744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.917 [INFO][5744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.930 [INFO][5744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.939 [INFO][5744] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.947 [INFO][5744] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.950 [INFO][5744] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.954 [INFO][5744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.954 [INFO][5744] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.956 [INFO][5744] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975 Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:18.962 [INFO][5744] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:19.159 [INFO][5744] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.136/26] block=192.168.98.128/26 handle="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:19.159 [INFO][5744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.98.136/26] handle="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:19.159 [INFO][5744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
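The IPAM trace above shows the host-wide lock being acquired, the node's affine block 192.168.98.128/26 being loaded, and 192.168.98.136 being claimed for the new workload endpoint under a per-container handle. A minimal sketch of the block arithmetic involved, using only the CIDRs from the log (the toy allocator loop is illustrative and is not Calico's actual allocation logic):

    # Sketch of the block arithmetic behind the IPAM trace above. The CIDR and the
    # claimed address come from the log; the "next free" loop is illustrative only.
    import ipaddress

    block = ipaddress.ip_network("192.168.98.128/26")   # node-affine block from the log
    claimed = ipaddress.ip_address("192.168.98.136")    # address handed to the new pod

    assert claimed in block                              # the claim stays inside the block
    print("block range:", block[0], "-", block[-1])      # 192.168.98.128 - 192.168.98.191
    print("addresses in block:", block.num_addresses)    # 64 for a /26

    # Toy allocator: return the first address in the block not already in use.
    in_use = {claimed}
    def next_free(blk, used):
        for addr in blk:
            if addr not in used:
                return addr
        return None

    print("next free address:", next_free(block, in_use))

The handle string recorded in the log (k8s-pod-network.<container ID>) ties the claimed address to this specific container so the allocation can be released when the endpoint is torn down.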
Dec 16 03:24:19.412871 containerd[2508]: 2025-12-16 03:24:19.159 [INFO][5744] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.136/26] IPv6=[] ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" HandleID="k8s-pod-network.c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" Dec 16 03:24:19.415348 containerd[2508]: 2025-12-16 03:24:19.161 [INFO][5665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0", GenerateName:"calico-apiserver-86474dbd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"928c764d-cf1a-4e24-874a-b4bd241b86e5", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86474dbd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"calico-apiserver-86474dbd54-65v57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif723ec48ef3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:19.415348 containerd[2508]: 2025-12-16 03:24:19.161 [INFO][5665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.136/32] ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" Dec 16 03:24:19.415348 containerd[2508]: 2025-12-16 03:24:19.161 [INFO][5665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif723ec48ef3 ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" Dec 16 03:24:19.415348 containerd[2508]: 2025-12-16 03:24:19.171 [INFO][5665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" Dec 16 03:24:19.415348 containerd[2508]: 2025-12-16 03:24:19.173 
[INFO][5665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0", GenerateName:"calico-apiserver-86474dbd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"928c764d-cf1a-4e24-874a-b4bd241b86e5", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86474dbd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975", Pod:"calico-apiserver-86474dbd54-65v57", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif723ec48ef3", MAC:"72:07:7e:23:8d:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:19.415348 containerd[2508]: 2025-12-16 03:24:19.409 [INFO][5665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-65v57" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--65v57-eth0" Dec 16 03:24:19.428000 audit[5992]: NETFILTER_CFG table=filter:144 family=2 entries=63 op=nft_register_chain pid=5992 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:19.428000 audit[5992]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7ffd773bf1c0 a2=0 a3=7ffd773bf1ac items=0 ppid=5172 pid=5992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.428000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:19.457917 containerd[2508]: time="2025-12-16T03:24:19.457861861Z" level=info msg="connecting to shim c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975" address="unix:///run/containerd/s/bf11a17bc6bb80526f47810ccd70060f0c550ab5150464bc1bf11127afae7648" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:19.488345 systemd[1]: Started cri-containerd-c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975.scope - libcontainer container 
c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975. Dec 16 03:24:19.495000 audit: BPF prog-id=275 op=LOAD Dec 16 03:24:19.496000 audit: BPF prog-id=276 op=LOAD Dec 16 03:24:19.496000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=6002 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337316134646463333238373535623664313635333038346331356136 Dec 16 03:24:19.496000 audit: BPF prog-id=276 op=UNLOAD Dec 16 03:24:19.496000 audit[6015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6002 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337316134646463333238373535623664313635333038346331356136 Dec 16 03:24:19.496000 audit: BPF prog-id=277 op=LOAD Dec 16 03:24:19.496000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=6002 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337316134646463333238373535623664313635333038346331356136 Dec 16 03:24:19.496000 audit: BPF prog-id=278 op=LOAD Dec 16 03:24:19.496000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=6002 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337316134646463333238373535623664313635333038346331356136 Dec 16 03:24:19.496000 audit: BPF prog-id=278 op=UNLOAD Dec 16 03:24:19.496000 audit[6015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=6002 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337316134646463333238373535623664313635333038346331356136 Dec 16 03:24:19.496000 audit: BPF 
prog-id=277 op=UNLOAD Dec 16 03:24:19.496000 audit[6015]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=6002 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337316134646463333238373535623664313635333038346331356136 Dec 16 03:24:19.497000 audit: BPF prog-id=279 op=LOAD Dec 16 03:24:19.497000 audit[6015]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=6002 pid=6015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.497000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337316134646463333238373535623664313635333038346331356136 Dec 16 03:24:19.528189 containerd[2508]: time="2025-12-16T03:24:19.528158448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86474dbd54-65v57,Uid:928c764d-cf1a-4e24-874a-b4bd241b86e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c71a4ddc328755b6d1653084c15a6e08f460d4a1526ca1d337b34c4a9571a975\"" Dec 16 03:24:19.529255 containerd[2508]: time="2025-12-16T03:24:19.529230433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:19.542597 kubelet[3995]: E1216 03:24:19.542531 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:24:19.542872 kubelet[3995]: E1216 03:24:19.542712 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:24:19.641348 systemd-networkd[2145]: cali75561621f8a: Gained IPv6LL Dec 16 03:24:19.770639 containerd[2508]: time="2025-12-16T03:24:19.770537075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:19.773290 containerd[2508]: time="2025-12-16T03:24:19.773247947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:24:19.773399 containerd[2508]: time="2025-12-16T03:24:19.773336269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:19.773569 kubelet[3995]: E1216 03:24:19.773536 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:19.773865 kubelet[3995]: E1216 03:24:19.773581 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:19.773865 kubelet[3995]: E1216 03:24:19.773720 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqh2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-65v57_calico-apiserver(928c764d-cf1a-4e24-874a-b4bd241b86e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:19.774879 kubelet[3995]: E1216 03:24:19.774855 3995 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:24:19.818000 audit[6044]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=6044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:19.818000 audit[6044]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc64246d00 a2=0 a3=7ffc64246cec items=0 ppid=4100 pid=6044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:19.828000 audit[6044]: NETFILTER_CFG table=nat:146 family=2 entries=44 op=nft_register_rule pid=6044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:19.828000 audit[6044]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc64246d00 a2=0 a3=7ffc64246cec items=0 ppid=4100 pid=6044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:19.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:20.012388 kubelet[3995]: I1216 03:24:20.011475 3995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dvbkk" podStartSLOduration=63.011459104 podStartE2EDuration="1m3.011459104s" podCreationTimestamp="2025-12-16 03:23:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:24:19.707752906 +0000 UTC m=+69.440095359" watchObservedRunningTime="2025-12-16 03:24:20.011459104 +0000 UTC m=+69.743801550" Dec 16 03:24:20.026262 systemd-networkd[2145]: cali7450390396e: Gained IPv6LL Dec 16 03:24:20.345270 systemd-networkd[2145]: cali0b0315b3d18: Gained IPv6LL Dec 16 03:24:20.419000 audit[6047]: NETFILTER_CFG table=filter:147 family=2 entries=14 op=nft_register_rule pid=6047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:20.419000 audit[6047]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf8fdb370 a2=0 a3=7ffcf8fdb35c items=0 ppid=4100 pid=6047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:20.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:20.429000 audit[6047]: NETFILTER_CFG table=nat:148 family=2 entries=56 op=nft_register_chain pid=6047 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:20.429000 audit[6047]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffcf8fdb370 a2=0 a3=7ffcf8fdb35c items=0 ppid=4100 pid=6047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:20.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:20.537592 systemd-networkd[2145]: calif723ec48ef3: Gained IPv6LL Dec 16 03:24:20.544333 kubelet[3995]: E1216 03:24:20.544297 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:24:20.544501 kubelet[3995]: E1216 03:24:20.544479 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:24:20.914645 systemd-networkd[2145]: cali56aac67fd0b: Link UP Dec 16 03:24:20.916337 systemd-networkd[2145]: cali56aac67fd0b: Gained carrier Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:17.911 [INFO][5759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0 calico-apiserver-86474dbd54- calico-apiserver b0a716ce-6354-47ff-896b-1da783a25f3a 926 0 2025-12-16 03:23:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86474dbd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547.0.0-a-dc3ed46bb5 calico-apiserver-86474dbd54-fphkv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56aac67fd0b [] [] }} ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:17.912 [INFO][5759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:18.249 [INFO][5789] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" HandleID="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" 
Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:18.249 [INFO][5789] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" HandleID="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547.0.0-a-dc3ed46bb5", "pod":"calico-apiserver-86474dbd54-fphkv", "timestamp":"2025-12-16 03:24:18.249050866 +0000 UTC"}, Hostname:"ci-4547.0.0-a-dc3ed46bb5", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:18.250 [INFO][5789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:19.159 [INFO][5789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:19.160 [INFO][5789] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547.0.0-a-dc3ed46bb5' Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:19.414 [INFO][5789] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:19.707 [INFO][5789] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:19.806 [INFO][5789] ipam/ipam.go 511: Trying affinity for 192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:19.956 [INFO][5789] ipam/ipam.go 158: Attempting to load block cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.015 [INFO][5789] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.98.128/26 host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.015 [INFO][5789] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.98.128/26 handle="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.411 [INFO][5789] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58 Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.869 [INFO][5789] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.98.128/26 handle="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.907 [INFO][5789] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.98.137/26] block=192.168.98.128/26 handle="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.908 [INFO][5789] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.98.137/26] handle="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" host="ci-4547.0.0-a-dc3ed46bb5" Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.908 [INFO][5789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:24:20.959996 containerd[2508]: 2025-12-16 03:24:20.908 [INFO][5789] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.98.137/26] IPv6=[] ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" HandleID="k8s-pod-network.da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Workload="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" Dec 16 03:24:20.961408 containerd[2508]: 2025-12-16 03:24:20.909 [INFO][5759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0", GenerateName:"calico-apiserver-86474dbd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"b0a716ce-6354-47ff-896b-1da783a25f3a", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86474dbd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"", Pod:"calico-apiserver-86474dbd54-fphkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56aac67fd0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:20.961408 containerd[2508]: 2025-12-16 03:24:20.909 [INFO][5759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.137/32] ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" Dec 16 03:24:20.961408 containerd[2508]: 2025-12-16 03:24:20.909 [INFO][5759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56aac67fd0b ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" Dec 16 03:24:20.961408 containerd[2508]: 2025-12-16 03:24:20.917 [INFO][5759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" Dec 16 03:24:20.961408 containerd[2508]: 2025-12-16 03:24:20.917 [INFO][5759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0", GenerateName:"calico-apiserver-86474dbd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"b0a716ce-6354-47ff-896b-1da783a25f3a", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 23, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86474dbd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547.0.0-a-dc3ed46bb5", ContainerID:"da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58", Pod:"calico-apiserver-86474dbd54-fphkv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56aac67fd0b", MAC:"6e:25:7c:3a:47:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:24:20.961408 containerd[2508]: 2025-12-16 03:24:20.957 [INFO][5759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" Namespace="calico-apiserver" Pod="calico-apiserver-86474dbd54-fphkv" WorkloadEndpoint="ci--4547.0.0--a--dc3ed46bb5-k8s-calico--apiserver--86474dbd54--fphkv-eth0" Dec 16 03:24:20.973000 audit[6058]: NETFILTER_CFG table=filter:149 family=2 entries=57 op=nft_register_chain pid=6058 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:24:20.973000 audit[6058]: SYSCALL arch=c000003e syscall=46 success=yes exit=27796 a0=3 a1=7fffc3d41e40 a2=0 a3=7fffc3d41e2c items=0 ppid=5172 pid=6058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:20.973000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:24:20.998994 containerd[2508]: time="2025-12-16T03:24:20.998943290Z" level=info msg="connecting to shim da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58" 
address="unix:///run/containerd/s/951f6062cb2f36466052f3fb87a1df00875b7d0072a5c14cc35ea0b17480ab89" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:24:21.023357 systemd[1]: Started cri-containerd-da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58.scope - libcontainer container da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58. Dec 16 03:24:21.034000 audit: BPF prog-id=280 op=LOAD Dec 16 03:24:21.035000 audit: BPF prog-id=281 op=LOAD Dec 16 03:24:21.035000 audit[6078]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=6068 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461346333383066383162366132613438326535343261343837306163 Dec 16 03:24:21.035000 audit: BPF prog-id=281 op=UNLOAD Dec 16 03:24:21.035000 audit[6078]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6068 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461346333383066383162366132613438326535343261343837306163 Dec 16 03:24:21.035000 audit: BPF prog-id=282 op=LOAD Dec 16 03:24:21.035000 audit[6078]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=6068 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461346333383066383162366132613438326535343261343837306163 Dec 16 03:24:21.035000 audit: BPF prog-id=283 op=LOAD Dec 16 03:24:21.035000 audit[6078]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=6068 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461346333383066383162366132613438326535343261343837306163 Dec 16 03:24:21.035000 audit: BPF prog-id=283 op=UNLOAD Dec 16 03:24:21.035000 audit[6078]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=6068 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.035000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461346333383066383162366132613438326535343261343837306163 Dec 16 03:24:21.035000 audit: BPF prog-id=282 op=UNLOAD Dec 16 03:24:21.035000 audit[6078]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=6068 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461346333383066383162366132613438326535343261343837306163 Dec 16 03:24:21.035000 audit: BPF prog-id=284 op=LOAD Dec 16 03:24:21.035000 audit[6078]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=6068 pid=6078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461346333383066383162366132613438326535343261343837306163 Dec 16 03:24:21.066676 containerd[2508]: time="2025-12-16T03:24:21.066598950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86474dbd54-fphkv,Uid:b0a716ce-6354-47ff-896b-1da783a25f3a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da4c380f81b6a2a482e542a4870ac60d29853a1f50137c4c16664a08fba76e58\"" Dec 16 03:24:21.067903 containerd[2508]: time="2025-12-16T03:24:21.067879799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:21.314065 containerd[2508]: time="2025-12-16T03:24:21.313929560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:21.316449 containerd[2508]: time="2025-12-16T03:24:21.316403081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:24:21.316564 containerd[2508]: time="2025-12-16T03:24:21.316487026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:21.316658 kubelet[3995]: E1216 03:24:21.316611 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:21.316959 kubelet[3995]: E1216 03:24:21.316670 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:21.316959 
kubelet[3995]: E1216 03:24:21.316881 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pwl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-fphkv_calico-apiserver(b0a716ce-6354-47ff-896b-1da783a25f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:21.318056 kubelet[3995]: E1216 03:24:21.318032 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:24:21.364000 audit[6104]: NETFILTER_CFG table=filter:150 family=2 entries=14 op=nft_register_rule pid=6104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:21.364000 audit[6104]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd29109d50 a2=0 a3=7ffd29109d3c items=0 ppid=4100 pid=6104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.364000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:21.369000 audit[6104]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=6104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:21.369000 audit[6104]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd29109d50 a2=0 a3=7ffd29109d3c items=0 ppid=4100 pid=6104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:21.369000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:21.549291 kubelet[3995]: E1216 03:24:21.549243 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:24:22.379000 audit[6106]: NETFILTER_CFG table=filter:152 family=2 entries=14 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:22.379000 audit[6106]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffcb1259d0 a2=0 a3=7fffcb1259bc items=0 ppid=4100 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:22.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:22.384000 audit[6106]: NETFILTER_CFG table=nat:153 family=2 entries=20 op=nft_register_rule pid=6106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:24:22.384000 audit[6106]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffcb1259d0 a2=0 a3=7fffcb1259bc items=0 ppid=4100 pid=6106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:24:22.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:24:22.550708 kubelet[3995]: E1216 03:24:22.550661 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:24:22.905415 systemd-networkd[2145]: cali56aac67fd0b: Gained IPv6LL Dec 16 03:24:28.348160 containerd[2508]: time="2025-12-16T03:24:28.348074892Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:24:28.586337 containerd[2508]: time="2025-12-16T03:24:28.586291703Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:28.589084 containerd[2508]: time="2025-12-16T03:24:28.589058633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:24:28.589084 containerd[2508]: time="2025-12-16T03:24:28.589101462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:28.589487 kubelet[3995]: E1216 03:24:28.589293 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:24:28.589487 kubelet[3995]: E1216 03:24:28.589333 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:24:28.589766 kubelet[3995]: E1216 03:24:28.589512 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e99e60b66e264e9fbdb6300d985b5bad,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:28.590048 containerd[2508]: time="2025-12-16T03:24:28.590019876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:24:28.828347 containerd[2508]: 
time="2025-12-16T03:24:28.828310643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:28.830639 containerd[2508]: time="2025-12-16T03:24:28.830614813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:24:28.830724 containerd[2508]: time="2025-12-16T03:24:28.830663106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:28.830832 kubelet[3995]: E1216 03:24:28.830775 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:24:28.830901 kubelet[3995]: E1216 03:24:28.830843 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:24:28.831090 kubelet[3995]: E1216 03:24:28.831027 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:24:28.831562 containerd[2508]: time="2025-12-16T03:24:28.831363937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:29.071619 containerd[2508]: time="2025-12-16T03:24:29.071578691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:29.074113 containerd[2508]: time="2025-12-16T03:24:29.074081414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:24:29.074182 containerd[2508]: time="2025-12-16T03:24:29.074155022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:29.074304 kubelet[3995]: E1216 03:24:29.074269 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:29.074369 kubelet[3995]: E1216 03:24:29.074313 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:29.074581 kubelet[3995]: E1216 03:24:29.074529 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kctb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69c4bb98b9-88qzw_calico-apiserver(453a3c95-d107-4f4e-b7f5-ee250655b168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:29.074881 containerd[2508]: time="2025-12-16T03:24:29.074857157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:24:29.076279 kubelet[3995]: E1216 03:24:29.076235 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:24:29.318012 containerd[2508]: time="2025-12-16T03:24:29.317891208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:29.320724 containerd[2508]: time="2025-12-16T03:24:29.320687656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:24:29.320844 containerd[2508]: time="2025-12-16T03:24:29.320774070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:29.320938 kubelet[3995]: E1216 03:24:29.320906 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:24:29.320982 kubelet[3995]: E1216 03:24:29.320948 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:24:29.321425 containerd[2508]: time="2025-12-16T03:24:29.321233866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:24:29.321938 kubelet[3995]: E1216 03:24:29.321208 3995 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:29.323195 kubelet[3995]: E1216 03:24:29.323166 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:24:29.594882 containerd[2508]: time="2025-12-16T03:24:29.594840469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:29.597275 containerd[2508]: time="2025-12-16T03:24:29.597228196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:24:29.597372 containerd[2508]: 
time="2025-12-16T03:24:29.597236466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:29.597440 kubelet[3995]: E1216 03:24:29.597410 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:24:29.598058 kubelet[3995]: E1216 03:24:29.597449 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:24:29.598058 kubelet[3995]: E1216 03:24:29.597583 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:29.599310 kubelet[3995]: E1216 03:24:29.599268 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:24:31.346707 containerd[2508]: time="2025-12-16T03:24:31.346307734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:31.616572 containerd[2508]: time="2025-12-16T03:24:31.616448144Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:31.619666 containerd[2508]: time="2025-12-16T03:24:31.619597631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:31.619666 containerd[2508]: time="2025-12-16T03:24:31.619608660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:24:31.619863 kubelet[3995]: E1216 03:24:31.619825 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:31.620253 kubelet[3995]: E1216 03:24:31.619870 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:31.620253 kubelet[3995]: E1216 03:24:31.620013 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqh2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-65v57_calico-apiserver(928c764d-cf1a-4e24-874a-b4bd241b86e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:31.621224 kubelet[3995]: E1216 03:24:31.621188 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:24:33.346712 containerd[2508]: time="2025-12-16T03:24:33.346613537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:24:33.602565 containerd[2508]: time="2025-12-16T03:24:33.602334513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:33.604933 containerd[2508]: time="2025-12-16T03:24:33.604895281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:24:33.605015 containerd[2508]: time="2025-12-16T03:24:33.604963296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:33.605167 kubelet[3995]: E1216 03:24:33.605127 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:24:33.605507 kubelet[3995]: E1216 03:24:33.605181 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:24:33.605507 kubelet[3995]: E1216 03:24:33.605324 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27fds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-284xb_calico-system(17fccc4a-a08c-4495-a01b-bad3cd3eab43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:33.606850 kubelet[3995]: E1216 03:24:33.606799 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:24:34.347126 containerd[2508]: time="2025-12-16T03:24:34.347087632Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:24:34.599276 containerd[2508]: time="2025-12-16T03:24:34.599120203Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:34.601601 containerd[2508]: time="2025-12-16T03:24:34.601568639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:24:34.601799 containerd[2508]: time="2025-12-16T03:24:34.601639047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:34.601828 kubelet[3995]: E1216 03:24:34.601745 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:24:34.601904 kubelet[3995]: E1216 03:24:34.601883 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:24:34.602170 kubelet[3995]: E1216 03:24:34.602043 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85c7d9d48b-hc6qj_calico-system(e0164474-95e7-4b01-988d-4ae10762d8d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:34.603263 kubelet[3995]: E1216 03:24:34.603212 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:24:35.346609 containerd[2508]: time="2025-12-16T03:24:35.345955882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:35.591545 containerd[2508]: time="2025-12-16T03:24:35.591503194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:35.594354 containerd[2508]: time="2025-12-16T03:24:35.594200866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:24:35.594354 containerd[2508]: time="2025-12-16T03:24:35.594242342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:35.594585 kubelet[3995]: E1216 03:24:35.594553 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:35.594858 kubelet[3995]: E1216 03:24:35.594596 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:35.594858 kubelet[3995]: E1216 03:24:35.594744 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pwl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-fphkv_calico-apiserver(b0a716ce-6354-47ff-896b-1da783a25f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:35.595906 kubelet[3995]: E1216 03:24:35.595883 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:24:40.348585 kubelet[3995]: E1216 03:24:40.348519 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:24:41.347824 kubelet[3995]: E1216 03:24:41.347761 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:24:43.346182 kubelet[3995]: E1216 03:24:43.346105 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:24:44.348690 kubelet[3995]: E1216 03:24:44.348480 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:24:46.350803 kubelet[3995]: E1216 03:24:46.350757 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:24:46.351286 kubelet[3995]: E1216 03:24:46.350734 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:24:50.350738 kubelet[3995]: E1216 03:24:50.350610 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:24:52.350228 containerd[2508]: time="2025-12-16T03:24:52.349638273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:24:52.599101 containerd[2508]: time="2025-12-16T03:24:52.599054206Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:52.601565 containerd[2508]: time="2025-12-16T03:24:52.601346008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:24:52.601565 containerd[2508]: time="2025-12-16T03:24:52.601364728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:52.601742 kubelet[3995]: E1216 03:24:52.601555 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:24:52.601742 kubelet[3995]: E1216 03:24:52.601600 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:24:52.602111 kubelet[3995]: E1216 03:24:52.601730 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e99e60b66e264e9fbdb6300d985b5bad,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:52.603627 containerd[2508]: time="2025-12-16T03:24:52.603588094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:24:52.854780 containerd[2508]: time="2025-12-16T03:24:52.854645854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:52.857034 containerd[2508]: time="2025-12-16T03:24:52.856989037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:24:52.857124 containerd[2508]: time="2025-12-16T03:24:52.857065019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:52.857252 kubelet[3995]: E1216 03:24:52.857213 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:24:52.857311 kubelet[3995]: E1216 03:24:52.857264 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:24:52.857426 kubelet[3995]: E1216 03:24:52.857382 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:52.858552 kubelet[3995]: E1216 03:24:52.858522 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:24:55.348216 containerd[2508]: time="2025-12-16T03:24:55.347240998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:55.600694 containerd[2508]: time="2025-12-16T03:24:55.600563678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:55.603744 containerd[2508]: time="2025-12-16T03:24:55.603607096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
16 03:24:55.603744 containerd[2508]: time="2025-12-16T03:24:55.603708428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:55.605114 kubelet[3995]: E1216 03:24:55.604018 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:55.605114 kubelet[3995]: E1216 03:24:55.604068 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:55.605114 kubelet[3995]: E1216 03:24:55.604249 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kctb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69c4bb98b9-88qzw_calico-apiserver(453a3c95-d107-4f4e-b7f5-ee250655b168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:55.605729 kubelet[3995]: E1216 03:24:55.605688 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:24:57.348044 containerd[2508]: time="2025-12-16T03:24:57.347998196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:24:57.589121 containerd[2508]: time="2025-12-16T03:24:57.589025941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:57.591353 containerd[2508]: time="2025-12-16T03:24:57.591326605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:24:57.591464 containerd[2508]: time="2025-12-16T03:24:57.591381939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:57.591548 kubelet[3995]: E1216 03:24:57.591505 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:24:57.591916 kubelet[3995]: E1216 03:24:57.591560 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:24:57.591916 kubelet[3995]: E1216 03:24:57.591714 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27fds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-284xb_calico-system(17fccc4a-a08c-4495-a01b-bad3cd3eab43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:57.592873 kubelet[3995]: E1216 03:24:57.592829 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:24:58.347356 containerd[2508]: time="2025-12-16T03:24:58.347316454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:24:58.596297 containerd[2508]: time="2025-12-16T03:24:58.596244360Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:58.599036 containerd[2508]: time="2025-12-16T03:24:58.598925927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:24:58.599036 containerd[2508]: time="2025-12-16T03:24:58.598936016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:58.599320 kubelet[3995]: E1216 03:24:58.599181 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:24:58.599320 kubelet[3995]: E1216 03:24:58.599223 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:24:58.599663 kubelet[3995]: E1216 03:24:58.599598 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:58.599934 containerd[2508]: time="2025-12-16T03:24:58.599887705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:24:58.860032 containerd[2508]: time="2025-12-16T03:24:58.859914097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:58.862398 containerd[2508]: time="2025-12-16T03:24:58.862353274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:24:58.862496 containerd[2508]: time="2025-12-16T03:24:58.862461998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:58.862596 kubelet[3995]: E1216 03:24:58.862564 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:58.862668 kubelet[3995]: E1216 03:24:58.862605 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:24:58.862854 kubelet[3995]: E1216 03:24:58.862821 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqh2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-65v57_calico-apiserver(928c764d-cf1a-4e24-874a-b4bd241b86e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:58.863032 containerd[2508]: time="2025-12-16T03:24:58.863010738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:24:58.864452 kubelet[3995]: E1216 03:24:58.864412 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:24:59.101429 containerd[2508]: time="2025-12-16T03:24:59.101376591Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:24:59.103722 containerd[2508]: time="2025-12-16T03:24:59.103676914Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:24:59.103805 containerd[2508]: time="2025-12-16T03:24:59.103687229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:24:59.104098 kubelet[3995]: E1216 03:24:59.103869 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:24:59.104098 kubelet[3995]: E1216 03:24:59.103907 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:24:59.104098 kubelet[3995]: E1216 03:24:59.104054 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:24:59.105560 kubelet[3995]: E1216 03:24:59.105512 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:25:00.347810 containerd[2508]: time="2025-12-16T03:25:00.347765100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:25:00.589880 containerd[2508]: time="2025-12-16T03:25:00.589785438Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:00.592350 containerd[2508]: time="2025-12-16T03:25:00.592287501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:25:00.594176 containerd[2508]: time="2025-12-16T03:25:00.592379432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:00.594259 kubelet[3995]: E1216 03:25:00.592507 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:25:00.594259 kubelet[3995]: E1216 03:25:00.592563 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:25:00.594259 kubelet[3995]: E1216 03:25:00.592701 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85c7d9d48b-hc6qj_calico-system(e0164474-95e7-4b01-988d-4ae10762d8d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:00.594259 kubelet[3995]: E1216 03:25:00.593968 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:25:04.348203 kubelet[3995]: E1216 03:25:04.348107 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:25:05.346465 containerd[2508]: time="2025-12-16T03:25:05.346426601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:25:05.595126 containerd[2508]: time="2025-12-16T03:25:05.595084818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:05.597606 containerd[2508]: time="2025-12-16T03:25:05.597532712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:25:05.597703 containerd[2508]: time="2025-12-16T03:25:05.597681098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:05.597981 kubelet[3995]: E1216 03:25:05.597934 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:05.598360 kubelet[3995]: E1216 03:25:05.597990 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:05.598360 kubelet[3995]: E1216 03:25:05.598124 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pwl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-fphkv_calico-apiserver(b0a716ce-6354-47ff-896b-1da783a25f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:05.599298 kubelet[3995]: E1216 03:25:05.599272 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:25:06.427608 kernel: kauditd_printk_skb: 173 callbacks suppressed Dec 16 03:25:06.427725 kernel: audit: type=1130 audit(1765855506.419:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.23:22-10.200.16.10:45012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:06.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.23:22-10.200.16.10:45012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:06.420694 systemd[1]: Started sshd@7-10.200.8.23:22-10.200.16.10:45012.service - OpenSSH per-connection server daemon (10.200.16.10:45012). 
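Editor's note: the containerd and kubelet records above show every pull of ghcr.io/flatcar/calico/*:v3.30.4 answering with a 404 from ghcr.io and surfacing to the kubelet as ErrImagePull. The Go sketch below is not from this host; it simply reissues one such pull directly against containerd so the same "not found" error can be reproduced outside the kubelet. The image reference is copied from the log, while the socket path, the k8s.io namespace, and the containerd client module are assumptions.

```go
// Minimal sketch (not part of this host's tooling): re-run one of the pulls
// recorded above directly against containerd, assuming the default socket
// path and the "k8s.io" namespace used by the kubelet. A missing tag should
// surface as the same "not found" error containerd logged after the 404.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image reference copied from the failing PullImage requests above.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull %s: %v", ref, err) // expected here: ... not found
	}
	log.Printf("pulled %s", img.Name())
}
```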
Dec 16 03:25:06.968243 sshd[6169]: Accepted publickey for core from 10.200.16.10 port 45012 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:06.977200 kernel: audit: type=1101 audit(1765855506.966:784): pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:06.966000 audit[6169]: USER_ACCT pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:06.978608 sshd-session[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:06.976000 audit[6169]: CRED_ACQ pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:06.990230 kernel: audit: type=1103 audit(1765855506.976:785): pid=6169 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:06.996523 kernel: audit: type=1006 audit(1765855506.976:786): pid=6169 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:25:06.996132 systemd-logind[2488]: New session 11 of user core. Dec 16 03:25:06.976000 audit[6169]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff57c2f80 a2=3 a3=0 items=0 ppid=1 pid=6169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:07.004160 kernel: audit: type=1300 audit(1765855506.976:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff57c2f80 a2=3 a3=0 items=0 ppid=1 pid=6169 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:06.976000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:07.007780 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 03:25:07.008156 kernel: audit: type=1327 audit(1765855506.976:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:07.019189 kernel: audit: type=1105 audit(1765855507.011:787): pid=6169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.011000 audit[6169]: USER_START pid=6169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.018000 audit[6173]: CRED_ACQ pid=6173 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.028165 kernel: audit: type=1103 audit(1765855507.018:788): pid=6173 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.464972 sshd[6173]: Connection closed by 10.200.16.10 port 45012 Dec 16 03:25:07.466793 sshd-session[6169]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:07.466000 audit[6169]: USER_END pid=6169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.471151 systemd[1]: sshd@7-10.200.8.23:22-10.200.16.10:45012.service: Deactivated successfully. Dec 16 03:25:07.473229 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:25:07.466000 audit[6169]: CRED_DISP pid=6169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.476795 systemd-logind[2488]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:25:07.477587 systemd-logind[2488]: Removed session 11. 
Dec 16 03:25:07.480846 kernel: audit: type=1106 audit(1765855507.466:789): pid=6169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.481072 kernel: audit: type=1104 audit(1765855507.466:790): pid=6169 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:07.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.23:22-10.200.16.10:45012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:08.347816 kubelet[3995]: E1216 03:25:08.347406 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:25:11.346194 kubelet[3995]: E1216 03:25:11.346108 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:25:12.349584 kubelet[3995]: E1216 03:25:12.349535 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:25:12.586597 systemd[1]: Started sshd@8-10.200.8.23:22-10.200.16.10:41428.service - OpenSSH per-connection server daemon (10.200.16.10:41428). Dec 16 03:25:12.594478 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:25:12.594518 kernel: audit: type=1130 audit(1765855512.585:792): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.23:22-10.200.16.10:41428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:12.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.23:22-10.200.16.10:41428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:13.138519 sshd[6210]: Accepted publickey for core from 10.200.16.10 port 41428 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:13.137000 audit[6210]: USER_ACCT pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.140622 sshd-session[6210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:13.153271 kernel: audit: type=1101 audit(1765855513.137:793): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.153357 kernel: audit: type=1103 audit(1765855513.137:794): pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.137000 audit[6210]: CRED_ACQ pid=6210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.168010 kernel: audit: type=1006 audit(1765855513.137:795): pid=6210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 03:25:13.168088 kernel: audit: type=1300 audit(1765855513.137:795): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb9b7f3a0 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:13.137000 audit[6210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb9b7f3a0 a2=3 a3=0 items=0 ppid=1 pid=6210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:13.171201 kernel: audit: type=1327 audit(1765855513.137:795): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:13.137000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:13.174530 systemd-logind[2488]: New session 12 of user core. Dec 16 03:25:13.180573 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 03:25:13.182000 audit[6210]: USER_START pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.193254 kernel: audit: type=1105 audit(1765855513.182:796): pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.192000 audit[6214]: CRED_ACQ pid=6214 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.206152 kernel: audit: type=1103 audit(1765855513.192:797): pid=6214 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.524504 sshd[6214]: Connection closed by 10.200.16.10 port 41428 Dec 16 03:25:13.525343 sshd-session[6210]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:13.525000 audit[6210]: USER_END pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.530943 systemd[1]: sshd@8-10.200.8.23:22-10.200.16.10:41428.service: Deactivated successfully. Dec 16 03:25:13.534046 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:25:13.527000 audit[6210]: CRED_DISP pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.536033 systemd-logind[2488]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:25:13.538530 systemd-logind[2488]: Removed session 12. Dec 16 03:25:13.539937 kernel: audit: type=1106 audit(1765855513.525:798): pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.540000 kernel: audit: type=1104 audit(1765855513.527:799): pid=6210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:13.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.23:22-10.200.16.10:41428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:14.348544 kubelet[3995]: E1216 03:25:14.348432 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:25:14.350000 kubelet[3995]: E1216 03:25:14.348778 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:25:16.347592 kubelet[3995]: E1216 03:25:16.347156 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:25:17.348947 kubelet[3995]: E1216 03:25:17.348884 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:25:18.644323 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:25:18.644429 kernel: audit: type=1130 audit(1765855518.636:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.23:22-10.200.16.10:41432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:18.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.23:22-10.200.16.10:41432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:18.637435 systemd[1]: Started sshd@9-10.200.8.23:22-10.200.16.10:41432.service - OpenSSH per-connection server daemon (10.200.16.10:41432). Dec 16 03:25:19.178000 audit[6227]: USER_ACCT pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.180176 sshd[6227]: Accepted publickey for core from 10.200.16.10 port 41432 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:19.181793 sshd-session[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:19.187173 kernel: audit: type=1101 audit(1765855519.178:802): pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.178000 audit[6227]: CRED_ACQ pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.187532 systemd-logind[2488]: New session 13 of user core. Dec 16 03:25:19.196293 kernel: audit: type=1103 audit(1765855519.178:803): pid=6227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.196396 kernel: audit: type=1006 audit(1765855519.178:804): pid=6227 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 03:25:19.178000 audit[6227]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe04952680 a2=3 a3=0 items=0 ppid=1 pid=6227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:19.178000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:19.208456 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 03:25:19.211991 kernel: audit: type=1300 audit(1765855519.178:804): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe04952680 a2=3 a3=0 items=0 ppid=1 pid=6227 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:19.212081 kernel: audit: type=1327 audit(1765855519.178:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:19.214000 audit[6227]: USER_START pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.217000 audit[6231]: CRED_ACQ pid=6231 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.229925 kernel: audit: type=1105 audit(1765855519.214:805): pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.229992 kernel: audit: type=1103 audit(1765855519.217:806): pid=6231 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.709251 sshd[6231]: Connection closed by 10.200.16.10 port 41432 Dec 16 03:25:19.712306 sshd-session[6227]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:19.712000 audit[6227]: USER_END pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.716471 systemd-logind[2488]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:25:19.718174 systemd[1]: sshd@9-10.200.8.23:22-10.200.16.10:41432.service: Deactivated successfully. Dec 16 03:25:19.721047 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:25:19.723250 systemd-logind[2488]: Removed session 13. 
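Editor's note: the same failure records repeat for each Calico image at every back-off interval, interleaved with the SSH session audit entries. The following self-contained Go sketch is a hypothetical helper, not part of Flatcar or the kubelet, that reads journal text like the lines above on stdin and tallies how often each image reference appears in ErrImagePull and ImagePullBackOff records, which makes the repetition easier to see than scanning the raw lines. It could be fed with something like `journalctl -u kubelet | go run tally.go`; the unit name and file name are assumptions.

```go
// Hypothetical helper: count image references that appear in ErrImagePull or
// ImagePullBackOff records read from stdin. Counts are per occurrence, so an
// image repeated inside one long record is counted more than once.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
	"strings"
)

func main() {
	// Matches references such as ghcr.io/flatcar/calico/apiserver:v3.30.4,
	// stopping at quotes, backslashes, or whitespace in the escaped log text.
	imgRE := regexp.MustCompile(`ghcr\.io/[^"'\\\s]+:[A-Za-z0-9._-]+`)

	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // records here are very long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "ErrImagePull") && !strings.Contains(line, "ImagePullBackOff") {
			continue
		}
		for _, ref := range imgRE.FindAllString(line, -1) {
			counts[ref]++
		}
	}

	refs := make([]string, 0, len(counts))
	for ref := range counts {
		refs = append(refs, ref)
	}
	sort.Strings(refs)
	for _, ref := range refs {
		fmt.Printf("%6d  %s\n", counts[ref], ref)
	}
}
```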
Dec 16 03:25:19.724160 kernel: audit: type=1106 audit(1765855519.712:807): pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.712000 audit[6227]: CRED_DISP pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.730161 kernel: audit: type=1104 audit(1765855519.712:808): pid=6227 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:19.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.23:22-10.200.16.10:41432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:19.823720 systemd[1]: Started sshd@10-10.200.8.23:22-10.200.16.10:41438.service - OpenSSH per-connection server daemon (10.200.16.10:41438). Dec 16 03:25:19.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.23:22-10.200.16.10:41438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:20.350585 kubelet[3995]: E1216 03:25:20.350535 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:25:20.366000 audit[6246]: USER_ACCT pid=6246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:20.367000 audit[6246]: CRED_ACQ pid=6246 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:20.367000 audit[6246]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0dfdec30 a2=3 a3=0 items=0 ppid=1 pid=6246 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:20.367000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:20.369433 sshd[6246]: Accepted publickey for core from 10.200.16.10 port 41438 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:20.369801 sshd-session[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Dec 16 03:25:20.375061 systemd-logind[2488]: New session 14 of user core. Dec 16 03:25:20.381440 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 03:25:20.385000 audit[6246]: USER_START pid=6246 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:20.387000 audit[6250]: CRED_ACQ pid=6250 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:20.855029 sshd[6250]: Connection closed by 10.200.16.10 port 41438 Dec 16 03:25:20.855569 sshd-session[6246]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:20.855000 audit[6246]: USER_END pid=6246 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:20.855000 audit[6246]: CRED_DISP pid=6246 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:20.859376 systemd[1]: sshd@10-10.200.8.23:22-10.200.16.10:41438.service: Deactivated successfully. Dec 16 03:25:20.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.23:22-10.200.16.10:41438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:20.861423 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:25:20.862255 systemd-logind[2488]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:25:20.863884 systemd-logind[2488]: Removed session 14. Dec 16 03:25:20.967749 systemd[1]: Started sshd@11-10.200.8.23:22-10.200.16.10:43582.service - OpenSSH per-connection server daemon (10.200.16.10:43582). Dec 16 03:25:20.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.23:22-10.200.16.10:43582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:21.499000 audit[6260]: USER_ACCT pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:21.501242 sshd[6260]: Accepted publickey for core from 10.200.16.10 port 43582 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:21.500000 audit[6260]: CRED_ACQ pid=6260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:21.500000 audit[6260]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc99b7f7b0 a2=3 a3=0 items=0 ppid=1 pid=6260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:21.500000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:21.502825 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:21.507598 systemd-logind[2488]: New session 15 of user core. Dec 16 03:25:21.512334 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 03:25:21.513000 audit[6260]: USER_START pid=6260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:21.515000 audit[6264]: CRED_ACQ pid=6264 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:22.097551 sshd[6264]: Connection closed by 10.200.16.10 port 43582 Dec 16 03:25:22.098094 sshd-session[6260]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:22.098000 audit[6260]: USER_END pid=6260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:22.098000 audit[6260]: CRED_DISP pid=6260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:22.101250 systemd[1]: sshd@11-10.200.8.23:22-10.200.16.10:43582.service: Deactivated successfully. Dec 16 03:25:22.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.23:22-10.200.16.10:43582 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:22.103133 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 03:25:22.105798 systemd-logind[2488]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 03:25:22.106907 systemd-logind[2488]: Removed session 15. Dec 16 03:25:23.348028 kubelet[3995]: E1216 03:25:23.347695 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:25:24.348312 kubelet[3995]: E1216 03:25:24.347720 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:25:25.346168 kubelet[3995]: E1216 03:25:25.346098 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:25:27.215493 systemd[1]: Started sshd@12-10.200.8.23:22-10.200.16.10:43590.service - OpenSSH per-connection server daemon (10.200.16.10:43590). Dec 16 03:25:27.219482 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:25:27.219639 kernel: audit: type=1130 audit(1765855527.214:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.23:22-10.200.16.10:43590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:27.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.23:22-10.200.16.10:43590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:27.767000 audit[6281]: USER_ACCT pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:27.772957 sshd[6281]: Accepted publickey for core from 10.200.16.10 port 43590 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:27.774182 kernel: audit: type=1101 audit(1765855527.767:829): pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:27.772000 audit[6281]: CRED_ACQ pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:27.775124 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:27.787799 kernel: audit: type=1103 audit(1765855527.772:830): pid=6281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:27.787878 kernel: audit: type=1006 audit(1765855527.772:831): pid=6281 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 03:25:27.792830 systemd-logind[2488]: New session 16 of user core. Dec 16 03:25:27.772000 audit[6281]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdbd87b80 a2=3 a3=0 items=0 ppid=1 pid=6281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:27.799267 kernel: audit: type=1300 audit(1765855527.772:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcdbd87b80 a2=3 a3=0 items=0 ppid=1 pid=6281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:27.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:27.801805 kernel: audit: type=1327 audit(1765855527.772:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:27.801251 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 03:25:27.804000 audit[6281]: USER_START pid=6281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:27.812252 kernel: audit: type=1105 audit(1765855527.804:832): pid=6281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:27.817172 kernel: audit: type=1103 audit(1765855527.811:833): pid=6285 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:27.811000 audit[6285]: CRED_ACQ pid=6285 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:28.160156 sshd[6285]: Connection closed by 10.200.16.10 port 43590 Dec 16 03:25:28.160365 sshd-session[6281]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:28.162000 audit[6281]: USER_END pid=6281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:28.173162 kernel: audit: type=1106 audit(1765855528.162:834): pid=6281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:28.162000 audit[6281]: CRED_DISP pid=6281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:28.175888 systemd[1]: sshd@12-10.200.8.23:22-10.200.16.10:43590.service: Deactivated successfully. Dec 16 03:25:28.181630 kernel: audit: type=1104 audit(1765855528.162:835): pid=6281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:28.181406 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 03:25:28.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.23:22-10.200.16.10:43590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:28.183176 systemd-logind[2488]: Session 16 logged out. Waiting for processes to exit. Dec 16 03:25:28.185304 systemd-logind[2488]: Removed session 16. 
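The kernel-side echoes of these records (e.g. `audit: type=1130 audit(1765855527.214:828): …`) stamp each event with an epoch.milliseconds:serial tuple rather than a wall-clock time. A small Go sketch to convert that stamp for correlation with the journal timestamps; it assumes the node clock is UTC, consistent with the `-00` offset used elsewhere in this log.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"time"
)

// auditStamp matches the audit(EPOCH.MILLIS:SERIAL) marker that kauditd
// attaches to every record echoed into the kernel log.
var auditStamp = regexp.MustCompile(`audit\((\d+)\.(\d+):(\d+)\)`)

func main() {
	// Fragment copied from the log above.
	line := `kernel: audit: type=1130 audit(1765855527.214:828): pid=1 uid=0`

	m := auditStamp.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no audit stamp found")
		return
	}
	sec, _ := strconv.ParseInt(m[1], 10, 64)
	ms, _ := strconv.ParseInt(m[2], 10, 64)

	t := time.Unix(sec, ms*int64(time.Millisecond)).UTC()
	fmt.Printf("serial %s at %s\n", m[3], t.Format("Jan 02 15:04:05.000"))
}
```

For the sample stamp it prints `serial 828 at Dec 16 03:25:27.214`, matching the surrounding journal lines.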
Dec 16 03:25:28.347254 kubelet[3995]: E1216 03:25:28.346975 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:25:29.346933 kubelet[3995]: E1216 03:25:29.346809 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:25:32.350502 kubelet[3995]: E1216 03:25:32.350453 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:25:33.296464 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:25:33.296577 kernel: audit: type=1130 audit(1765855533.294:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.23:22-10.200.16.10:51378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:33.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.23:22-10.200.16.10:51378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:33.295434 systemd[1]: Started sshd@13-10.200.8.23:22-10.200.16.10:51378.service - OpenSSH per-connection server daemon (10.200.16.10:51378). 
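The kubelet errors above repeat on a slow cadence because pulls that fail with ErrImagePull are retried under an exponential backoff, which is what the ImagePullBackOff state reflects. A rough Go sketch of that retry schedule follows; the 10-second initial delay and 5-minute cap are my assumption of the kubelet defaults, not values taken from this log.

```go
package main

import (
	"fmt"
	"time"
)

// pullBackoff returns an illustrative delay before the next image pull
// attempt: exponential growth from an initial delay up to a hard cap.
func pullBackoff(attempt int) time.Duration {
	const (
		initial  = 10 * time.Second // assumed kubelet default
		maxDelay = 5 * time.Minute  // assumed kubelet cap
	)
	d := initial
	for i := 0; i < attempt; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for attempt := 0; attempt < 7; attempt++ {
		fmt.Printf("attempt %d: wait %s before retrying the pull\n",
			attempt, pullBackoff(attempt))
	}
	// 10s, 20s, 40s, 1m20s, 2m40s, then 5m for every later attempt.
}
```

Once the cap is reached, each affected Calico pod re-emits the same "Back-off pulling image" error roughly every five minutes, which is the pattern visible throughout this log.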
Dec 16 03:25:33.346776 kubelet[3995]: E1216 03:25:33.346452 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:25:33.860592 kernel: audit: type=1101 audit(1765855533.852:838): pid=6302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:33.852000 audit[6302]: USER_ACCT pid=6302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:33.858036 sshd-session[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:33.860990 sshd[6302]: Accepted publickey for core from 10.200.16.10 port 51378 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:33.855000 audit[6302]: CRED_ACQ pid=6302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:33.866454 systemd-logind[2488]: New session 17 of user core. Dec 16 03:25:33.870699 kernel: audit: type=1103 audit(1765855533.855:839): pid=6302 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:33.870763 kernel: audit: type=1006 audit(1765855533.856:840): pid=6302 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 03:25:33.876223 kernel: audit: type=1300 audit(1765855533.856:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0ba40f20 a2=3 a3=0 items=0 ppid=1 pid=6302 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:33.856000 audit[6302]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0ba40f20 a2=3 a3=0 items=0 ppid=1 pid=6302 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:33.878219 kernel: audit: type=1327 audit(1765855533.856:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:33.856000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:33.878417 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 03:25:33.882000 audit[6302]: USER_START pid=6302 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:33.896723 kernel: audit: type=1105 audit(1765855533.882:841): pid=6302 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:33.896789 kernel: audit: type=1103 audit(1765855533.890:842): pid=6306 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:33.890000 audit[6306]: CRED_ACQ pid=6306 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:34.279572 sshd[6306]: Connection closed by 10.200.16.10 port 51378 Dec 16 03:25:34.282337 sshd-session[6302]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:34.282000 audit[6302]: USER_END pid=6302 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:34.294697 kernel: audit: type=1106 audit(1765855534.282:843): pid=6302 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:34.294413 systemd-logind[2488]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:25:34.295660 systemd[1]: sshd@13-10.200.8.23:22-10.200.16.10:51378.service: Deactivated successfully. Dec 16 03:25:34.298790 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 03:25:34.283000 audit[6302]: CRED_DISP pid=6302 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:34.302201 systemd-logind[2488]: Removed session 17. Dec 16 03:25:34.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.23:22-10.200.16.10:51378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:34.308173 kernel: audit: type=1104 audit(1765855534.283:844): pid=6302 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:38.348544 kubelet[3995]: E1216 03:25:38.348502 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:25:39.347376 containerd[2508]: time="2025-12-16T03:25:39.347333841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:25:39.347837 kubelet[3995]: E1216 03:25:39.347808 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:25:39.391385 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:25:39.391488 kernel: audit: type=1130 audit(1765855539.389:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.23:22-10.200.16.10:51384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:39.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.23:22-10.200.16.10:51384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:39.389692 systemd[1]: Started sshd@14-10.200.8.23:22-10.200.16.10:51384.service - OpenSSH per-connection server daemon (10.200.16.10:51384). 
Dec 16 03:25:39.593369 containerd[2508]: time="2025-12-16T03:25:39.593327238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:39.596003 containerd[2508]: time="2025-12-16T03:25:39.595964400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:25:39.596071 containerd[2508]: time="2025-12-16T03:25:39.596052980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:39.596318 kubelet[3995]: E1216 03:25:39.596277 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:25:39.596654 kubelet[3995]: E1216 03:25:39.596329 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:25:39.596654 kubelet[3995]: E1216 03:25:39.596477 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27fds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-284xb_calico-system(17fccc4a-a08c-4495-a01b-bad3cd3eab43): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:39.597703 kubelet[3995]: E1216 03:25:39.597631 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:25:39.933000 audit[6341]: USER_ACCT pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:39.938931 sshd-session[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:39.940041 sshd[6341]: Accepted publickey for core from 10.200.16.10 port 51384 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:39.940656 kernel: audit: type=1101 audit(1765855539.933:847): pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:39.940912 kernel: audit: type=1103 audit(1765855539.936:848): pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:39.936000 audit[6341]: CRED_ACQ pid=6341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:39.948157 kernel: audit: type=1006 audit(1765855539.936:849): pid=6341 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 03:25:39.948225 kernel: audit: type=1300 audit(1765855539.936:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7006d480 a2=3 a3=0 items=0 ppid=1 pid=6341 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:39.936000 audit[6341]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe7006d480 a2=3 a3=0 items=0 ppid=1 pid=6341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:39.952091 systemd-logind[2488]: New session 18 of user core. Dec 16 03:25:39.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:39.955337 kernel: audit: type=1327 audit(1765855539.936:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:39.961296 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 03:25:39.962000 audit[6341]: USER_START pid=6341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:39.970162 kernel: audit: type=1105 audit(1765855539.962:850): pid=6341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:39.970235 kernel: audit: type=1103 audit(1765855539.967:851): pid=6345 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:39.967000 audit[6345]: CRED_ACQ pid=6345 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:40.299758 sshd[6345]: Connection closed by 10.200.16.10 port 51384 Dec 16 03:25:40.300313 sshd-session[6341]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:40.301000 audit[6341]: USER_END pid=6341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:40.307447 systemd[1]: sshd@14-10.200.8.23:22-10.200.16.10:51384.service: Deactivated successfully. Dec 16 03:25:40.311214 kernel: audit: type=1106 audit(1765855540.301:852): pid=6341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:40.312201 systemd[1]: session-18.scope: Deactivated successfully. 
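The containerd entries above show the underlying cause: ghcr.io answers the manifest request for these v3.30.4 tags with `404 Not Found`, so image resolution fails before any layer is fetched. The hedged Go sketch below reproduces that check directly against the OCI distribution API; the anonymous token endpoint and its query parameters are my assumption of how ghcr.io exposes the standard Bearer-auth flow for public images.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// tagExists asks the registry whether a manifest for repo:tag exists.
// Assumption: an anonymous pull token can be obtained from /token for
// public repositories, as in the usual Docker/OCI registry auth flow.
func tagExists(registry, repo, tag string) (bool, error) {
	tokenURL := fmt.Sprintf("https://%s/token?scope=repository:%s:pull", registry, repo)
	resp, err := http.Get(tokenURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// HEAD the manifest; a 404 here is exactly what containerd reported above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://%s/v2/%s/manifests/%s", registry, repo, tag), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

	mresp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer mresp.Body.Close()
	return mresp.StatusCode == http.StatusOK, nil
}

func main() {
	// Image reference taken from the failing pulls in the log above.
	ok, err := tagExists("ghcr.io", "flatcar/calico/apiserver", "v3.30.4")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("ghcr.io/flatcar/calico/apiserver:v3.30.4 resolvable: %v\n", ok)
}
```

A 404 from the manifest endpoint corresponds to the `failed to resolve image … not found` errors above; a 200 would mean the tag is resolvable and the failure lies elsewhere (auth, mirror configuration, or a mistyped tag).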
Dec 16 03:25:40.301000 audit[6341]: CRED_DISP pid=6341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:40.316643 systemd-logind[2488]: Session 18 logged out. Waiting for processes to exit. Dec 16 03:25:40.318164 kernel: audit: type=1104 audit(1765855540.301:853): pid=6341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:40.320517 systemd-logind[2488]: Removed session 18. Dec 16 03:25:40.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.23:22-10.200.16.10:51384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:40.349409 kubelet[3995]: E1216 03:25:40.349380 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:25:43.348561 containerd[2508]: time="2025-12-16T03:25:43.348508361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:25:43.587010 containerd[2508]: time="2025-12-16T03:25:43.586965775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:43.589668 containerd[2508]: time="2025-12-16T03:25:43.589641497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:25:43.589767 containerd[2508]: time="2025-12-16T03:25:43.589690642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:43.589892 kubelet[3995]: E1216 03:25:43.589860 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:25:43.590219 kubelet[3995]: E1216 03:25:43.589909 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:25:43.590511 kubelet[3995]: E1216 03:25:43.590458 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:43.599483 containerd[2508]: time="2025-12-16T03:25:43.598542232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:25:43.834698 containerd[2508]: time="2025-12-16T03:25:43.834654616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:43.837319 containerd[2508]: time="2025-12-16T03:25:43.837265011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:25:43.837389 containerd[2508]: time="2025-12-16T03:25:43.837358336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:43.837565 kubelet[3995]: E1216 03:25:43.837517 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:25:43.837619 kubelet[3995]: E1216 03:25:43.837571 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:25:43.837745 kubelet[3995]: E1216 03:25:43.837713 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ks26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-srg9b_calico-system(52f35797-5a94-4b5f-8ac7-147ca2758736): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:43.839652 kubelet[3995]: E1216 03:25:43.839621 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:25:44.347500 containerd[2508]: time="2025-12-16T03:25:44.347187883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:25:44.586052 containerd[2508]: time="2025-12-16T03:25:44.586006202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:25:44.588489 containerd[2508]: time="2025-12-16T03:25:44.588462162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:25:44.588590 containerd[2508]: time="2025-12-16T03:25:44.588538335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:44.588735 kubelet[3995]: E1216 03:25:44.588687 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:44.588816 kubelet[3995]: E1216 03:25:44.588749 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:44.588957 kubelet[3995]: E1216 03:25:44.588888 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kctb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69c4bb98b9-88qzw_calico-apiserver(453a3c95-d107-4f4e-b7f5-ee250655b168): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:44.590289 kubelet[3995]: E1216 03:25:44.590206 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:25:45.414537 systemd[1]: Started sshd@15-10.200.8.23:22-10.200.16.10:38032.service - OpenSSH per-connection server daemon (10.200.16.10:38032). Dec 16 03:25:45.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.23:22-10.200.16.10:38032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:45.415506 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:25:45.415556 kernel: audit: type=1130 audit(1765855545.413:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.23:22-10.200.16.10:38032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:45.966000 audit[6359]: USER_ACCT pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:45.972455 sshd-session[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:45.970000 audit[6359]: CRED_ACQ pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:45.975395 sshd[6359]: Accepted publickey for core from 10.200.16.10 port 38032 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:45.976691 kernel: audit: type=1101 audit(1765855545.966:856): pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:45.976762 kernel: audit: type=1103 audit(1765855545.970:857): pid=6359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:45.981177 kernel: audit: type=1006 audit(1765855545.970:858): pid=6359 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 03:25:45.970000 audit[6359]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a176be0 a2=3 a3=0 items=0 ppid=1 pid=6359 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:45.970000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:45.987067 kernel: audit: type=1300 audit(1765855545.970:858): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a176be0 a2=3 a3=0 items=0 ppid=1 pid=6359 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:45.987183 kernel: audit: type=1327 audit(1765855545.970:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:45.988439 systemd-logind[2488]: New session 19 of user core. Dec 16 03:25:46.002353 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 03:25:46.006000 audit[6359]: USER_START pid=6359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.014152 kernel: audit: type=1105 audit(1765855546.006:859): pid=6359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.014000 audit[6363]: CRED_ACQ pid=6363 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.023171 kernel: audit: type=1103 audit(1765855546.014:860): pid=6363 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.356944 sshd[6363]: Connection closed by 10.200.16.10 port 38032 Dec 16 03:25:46.358186 sshd-session[6359]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:46.358000 audit[6359]: USER_END pid=6359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.366506 systemd[1]: sshd@15-10.200.8.23:22-10.200.16.10:38032.service: Deactivated successfully. Dec 16 03:25:46.358000 audit[6359]: CRED_DISP pid=6359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.369932 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:25:46.371237 systemd-logind[2488]: Session 19 logged out. Waiting for processes to exit. 
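The pulls that the kubelet drives through CRI can also be reproduced with containerd's Go client against the node's own socket, which surfaces the same "failed to pull and unpack image" error outside Kubernetes. This is a sketch only, assuming the containerd 1.x client import paths (the 2.x client moved to `github.com/containerd/containerd/v2/client`); the `k8s.io` namespace is where the kubelet's images live.

```go
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node's containerd socket (requires root on the node).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Use the CRI namespace so the result is visible to the kubelet.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// One of the references that is failing in the log above.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expected here: a NotFound resolution error matching the log.
		log.Fatalf("pull of %s failed: %v", ref, err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```

With the missing v3.30.4 references above this should fail with the same NotFound resolution error that containerd logged; once the images are published (or the tag is corrected), the same call succeeds and the kubelet's next retry starts the containers.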
Dec 16 03:25:46.372950 kernel: audit: type=1106 audit(1765855546.358:861): pid=6359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.373007 kernel: audit: type=1104 audit(1765855546.358:862): pid=6359 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:46.374779 systemd-logind[2488]: Removed session 19. Dec 16 03:25:46.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.23:22-10.200.16.10:38032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:46.472889 systemd[1]: Started sshd@16-10.200.8.23:22-10.200.16.10:38038.service - OpenSSH per-connection server daemon (10.200.16.10:38038). Dec 16 03:25:46.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.23:22-10.200.16.10:38038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:47.028986 sshd[6375]: Accepted publickey for core from 10.200.16.10 port 38038 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:47.027000 audit[6375]: USER_ACCT pid=6375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:47.028000 audit[6375]: CRED_ACQ pid=6375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:47.028000 audit[6375]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd0e88050 a2=3 a3=0 items=0 ppid=1 pid=6375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:47.028000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:47.030968 sshd-session[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:47.040785 systemd-logind[2488]: New session 20 of user core. Dec 16 03:25:47.044344 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 03:25:47.046000 audit[6375]: USER_START pid=6375 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:47.048000 audit[6384]: CRED_ACQ pid=6384 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:47.346942 containerd[2508]: time="2025-12-16T03:25:47.346388419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:25:47.449104 sshd[6384]: Connection closed by 10.200.16.10 port 38038 Dec 16 03:25:47.449618 sshd-session[6375]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:47.449000 audit[6375]: USER_END pid=6375 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:47.449000 audit[6375]: CRED_DISP pid=6375 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:47.453530 systemd-logind[2488]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:25:47.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.23:22-10.200.16.10:38038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:47.453689 systemd[1]: sshd@16-10.200.8.23:22-10.200.16.10:38038.service: Deactivated successfully. Dec 16 03:25:47.456085 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:25:47.457880 systemd-logind[2488]: Removed session 20. Dec 16 03:25:47.563674 systemd[1]: Started sshd@17-10.200.8.23:22-10.200.16.10:38052.service - OpenSSH per-connection server daemon (10.200.16.10:38052). Dec 16 03:25:47.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.23:22-10.200.16.10:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:47.584158 containerd[2508]: time="2025-12-16T03:25:47.583283737Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:47.585974 containerd[2508]: time="2025-12-16T03:25:47.585728498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:25:47.586527 containerd[2508]: time="2025-12-16T03:25:47.585899938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:47.587151 kubelet[3995]: E1216 03:25:47.586734 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:25:47.587151 kubelet[3995]: E1216 03:25:47.586783 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:25:47.587151 kubelet[3995]: E1216 03:25:47.586912 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e99e60b66e264e9fbdb6300d985b5bad,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:47.589474 containerd[2508]: time="2025-12-16T03:25:47.589443362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:25:47.831472 containerd[2508]: time="2025-12-16T03:25:47.831427430Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:47.833822 containerd[2508]: time="2025-12-16T03:25:47.833787543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:25:47.833924 containerd[2508]: time="2025-12-16T03:25:47.833794944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:47.834065 kubelet[3995]: E1216 03:25:47.833967 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:25:47.834121 kubelet[3995]: E1216 03:25:47.834077 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:25:47.834327 kubelet[3995]: E1216 03:25:47.834277 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldgsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6fd5b56957-fm9l2_calico-system(08adb93e-a5f4-4e36-9d73-5c61441c3142): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:47.835454 kubelet[3995]: E1216 03:25:47.835431 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:25:48.112000 audit[6404]: USER_ACCT pid=6404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:48.113630 sshd[6404]: Accepted publickey for core from 10.200.16.10 port 38052 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:48.113000 audit[6404]: CRED_ACQ pid=6404 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:48.113000 audit[6404]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6a2e3a50 a2=3 a3=0 items=0 ppid=1 pid=6404 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:48.113000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:48.115419 sshd-session[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:48.120205 systemd-logind[2488]: New session 21 of user core. Dec 16 03:25:48.124300 systemd[1]: Started session-21.scope - Session 21 of User core. 
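The 404 responses above show containerd failing to resolve ghcr.io/flatcar/calico/whisker:v3.30.4 and whisker-backend:v3.30.4 before any layers are fetched. That resolution step can be reproduced outside containerd by querying the registry's manifest endpoint over the Docker Registry HTTP API v2; ghcr.io hands out anonymous pull tokens from its /token endpoint for public repositories. The Go sketch below does that for the repository and tag taken from the log; it assumes anonymous pull access and keeps error handling minimal.

// manifestcheck probes whether an image tag resolves on ghcr.io, mimicking
// the resolution step that returned 404 in the containerd entries above.
// Minimal sketch: anonymous pull token + HEAD on the manifest endpoint.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	repo := "flatcar/calico/whisker" // repository from the failing pull above
	tag := "v3.30.4"

	// 1. Fetch an anonymous bearer token scoped to pulling this repository.
	tokenURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=%s",
		url.QueryEscape("repository:"+repo+":pull"))
	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest: 200 means the tag resolves, 404 matches the log.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.v2+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> %s\n", repo, tag, res.Status)
}

A 200 status means the tag resolves; a 404 here reproduces the "failed to resolve image" error reported by containerd and kubelet above.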
Dec 16 03:25:48.125000 audit[6404]: USER_START pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:48.127000 audit[6408]: CRED_ACQ pid=6408 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:50.348229 containerd[2508]: time="2025-12-16T03:25:50.348182512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:25:50.599703 containerd[2508]: time="2025-12-16T03:25:50.599577582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:50.601980 containerd[2508]: time="2025-12-16T03:25:50.601938604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:25:50.602065 containerd[2508]: time="2025-12-16T03:25:50.602011813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:50.602198 kubelet[3995]: E1216 03:25:50.602162 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:50.602591 kubelet[3995]: E1216 03:25:50.602209 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:50.602591 kubelet[3995]: E1216 03:25:50.602375 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pwl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-fphkv_calico-apiserver(b0a716ce-6354-47ff-896b-1da783a25f3a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:50.603880 kubelet[3995]: E1216 03:25:50.603828 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:25:50.965000 audit[6423]: NETFILTER_CFG table=filter:154 family=2 entries=26 op=nft_register_rule pid=6423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:50.968350 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 16 03:25:50.968383 kernel: audit: type=1325 audit(1765855550.965:879): table=filter:154 family=2 entries=26 op=nft_register_rule pid=6423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:50.965000 audit[6423]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd116646d0 a2=0 a3=7ffd116646bc items=0 ppid=4100 pid=6423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:50.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:50.981706 kernel: audit: type=1300 audit(1765855550.965:879): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd116646d0 a2=0 a3=7ffd116646bc items=0 ppid=4100 pid=6423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:50.981801 kernel: audit: type=1327 audit(1765855550.965:879): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:50.972000 audit[6423]: NETFILTER_CFG table=nat:155 family=2 entries=20 op=nft_register_rule pid=6423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:50.972000 audit[6423]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd116646d0 a2=0 a3=0 items=0 ppid=4100 
pid=6423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:50.991754 kernel: audit: type=1325 audit(1765855550.972:880): table=nat:155 family=2 entries=20 op=nft_register_rule pid=6423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:50.991812 kernel: audit: type=1300 audit(1765855550.972:880): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd116646d0 a2=0 a3=0 items=0 ppid=4100 pid=6423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:50.996210 kernel: audit: type=1327 audit(1765855550.972:880): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:50.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:51.215000 audit[6432]: NETFILTER_CFG table=filter:156 family=2 entries=38 op=nft_register_rule pid=6432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:51.215000 audit[6432]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc11b57000 a2=0 a3=7ffc11b56fec items=0 ppid=4100 pid=6432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:51.225059 kernel: audit: type=1325 audit(1765855551.215:881): table=filter:156 family=2 entries=38 op=nft_register_rule pid=6432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:51.225114 kernel: audit: type=1300 audit(1765855551.215:881): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc11b57000 a2=0 a3=7ffc11b56fec items=0 ppid=4100 pid=6432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:51.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:51.229396 kernel: audit: type=1327 audit(1765855551.215:881): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:51.230473 kernel: audit: type=1325 audit(1765855551.224:882): table=nat:157 family=2 entries=20 op=nft_register_rule pid=6432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:51.224000 audit[6432]: NETFILTER_CFG table=nat:157 family=2 entries=20 op=nft_register_rule pid=6432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:51.224000 audit[6432]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc11b57000 a2=0 a3=0 items=0 ppid=4100 pid=6432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:51.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:51.347175 containerd[2508]: 
time="2025-12-16T03:25:51.346874089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:25:51.460388 sshd[6408]: Connection closed by 10.200.16.10 port 38052 Dec 16 03:25:51.461333 sshd-session[6404]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:51.462000 audit[6404]: USER_END pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:51.462000 audit[6404]: CRED_DISP pid=6404 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:51.466495 systemd-logind[2488]: Session 21 logged out. Waiting for processes to exit. Dec 16 03:25:51.466722 systemd[1]: sshd@17-10.200.8.23:22-10.200.16.10:38052.service: Deactivated successfully. Dec 16 03:25:51.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.23:22-10.200.16.10:38052 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:51.468807 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:25:51.471097 systemd-logind[2488]: Removed session 21. Dec 16 03:25:51.573569 systemd[1]: Started sshd@18-10.200.8.23:22-10.200.16.10:48564.service - OpenSSH per-connection server daemon (10.200.16.10:48564). Dec 16 03:25:51.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.23:22-10.200.16.10:48564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:51.591676 containerd[2508]: time="2025-12-16T03:25:51.591640794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:51.594650 containerd[2508]: time="2025-12-16T03:25:51.594599093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:25:51.594826 containerd[2508]: time="2025-12-16T03:25:51.594736448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:51.595213 kubelet[3995]: E1216 03:25:51.595105 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:51.595213 kubelet[3995]: E1216 03:25:51.595194 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:25:51.595621 kubelet[3995]: E1216 03:25:51.595573 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kqh2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86474dbd54-65v57_calico-apiserver(928c764d-cf1a-4e24-874a-b4bd241b86e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:51.596789 kubelet[3995]: E1216 03:25:51.596749 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:25:52.109000 audit[6437]: USER_ACCT pid=6437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:52.110592 sshd[6437]: Accepted publickey for core from 10.200.16.10 port 48564 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:52.110000 audit[6437]: CRED_ACQ pid=6437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:52.110000 audit[6437]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3b2aef50 a2=3 a3=0 items=0 ppid=1 pid=6437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:52.110000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:52.112364 sshd-session[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:52.117235 systemd-logind[2488]: New session 22 of user core. Dec 16 03:25:52.122296 systemd[1]: Started session-22.scope - Session 22 of User core. 
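The audit PROCTITLE records scattered through this log (for example proctitle=737368642D73657373696F6E3A20636F7265205B707269765D on the sshd-session entries, and the longer value on the iptables-restore records) carry the process command line hex-encoded, with NUL bytes separating the arguments. The small Go decoder sketched below turns them back into readable command lines; the two sample values are copied verbatim from the records above, and the decoded output is shown in the trailing comment.

// proctitle-decode turns hex-encoded audit PROCTITLE values back into
// readable command lines. Audit encodes the process title as hex, with
// NUL bytes separating the individual arguments.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// Arguments are NUL-separated; join them with spaces for display.
	return strings.Join(strings.Split(string(raw), "\x00"), " "), nil
}

func main() {
	// Both values are copied verbatim from audit records in this log.
	samples := []string{
		"737368642D73657373696F6E3A20636F7265205B707269765D",
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
	}
	for _, s := range samples {
		out, err := decodeProctitle(s)
		if err != nil {
			fmt.Println("decode error:", err)
			continue
		}
		fmt.Println(out)
		// Prints:
		//   sshd-session: core [priv]
		//   iptables-restore -w 5 -W 100000 --noflush --counters
	}
}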
Dec 16 03:25:52.123000 audit[6437]: USER_START pid=6437 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:52.124000 audit[6441]: CRED_ACQ pid=6441 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:52.350488 kubelet[3995]: E1216 03:25:52.350446 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43" Dec 16 03:25:53.346166 containerd[2508]: time="2025-12-16T03:25:53.346111280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:25:53.451947 sshd[6441]: Connection closed by 10.200.16.10 port 48564 Dec 16 03:25:53.452512 sshd-session[6437]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:53.452000 audit[6437]: USER_END pid=6437 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:53.453000 audit[6437]: CRED_DISP pid=6437 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:53.456452 systemd[1]: sshd@18-10.200.8.23:22-10.200.16.10:48564.service: Deactivated successfully. Dec 16 03:25:53.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.23:22-10.200.16.10:48564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:53.458386 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:25:53.459689 systemd-logind[2488]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:25:53.461589 systemd-logind[2488]: Removed session 22. Dec 16 03:25:53.564599 systemd[1]: Started sshd@19-10.200.8.23:22-10.200.16.10:48568.service - OpenSSH per-connection server daemon (10.200.16.10:48568). Dec 16 03:25:53.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.23:22-10.200.16.10:48568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:53.591232 containerd[2508]: time="2025-12-16T03:25:53.591108363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:25:53.593700 containerd[2508]: time="2025-12-16T03:25:53.593664792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:25:53.593760 containerd[2508]: time="2025-12-16T03:25:53.593748258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:25:53.593965 kubelet[3995]: E1216 03:25:53.593937 3995 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:25:53.594266 kubelet[3995]: E1216 03:25:53.593978 3995 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:25:53.594266 kubelet[3995]: E1216 03:25:53.594210 3995 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85c7d9d48b-hc6qj_calico-system(e0164474-95e7-4b01-988d-4ae10762d8d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:25:53.596099 kubelet[3995]: E1216 03:25:53.596061 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:25:54.099000 audit[6451]: USER_ACCT pid=6451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:54.100528 sshd[6451]: Accepted publickey for core from 10.200.16.10 port 48568 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:25:54.100000 audit[6451]: CRED_ACQ pid=6451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:54.100000 audit[6451]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc77613b00 a2=3 a3=0 items=0 ppid=1 pid=6451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:54.100000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:25:54.102252 sshd-session[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:25:54.108556 systemd-logind[2488]: New session 23 of user core. Dec 16 03:25:54.112345 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 03:25:54.113000 audit[6451]: USER_START pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:54.115000 audit[6455]: CRED_ACQ pid=6455 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:54.456965 sshd[6455]: Connection closed by 10.200.16.10 port 48568 Dec 16 03:25:54.458036 sshd-session[6451]: pam_unix(sshd:session): session closed for user core Dec 16 03:25:54.460000 audit[6451]: USER_END pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:54.460000 audit[6451]: CRED_DISP pid=6451 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:25:54.464070 systemd[1]: sshd@19-10.200.8.23:22-10.200.16.10:48568.service: Deactivated successfully. Dec 16 03:25:54.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.23:22-10.200.16.10:48568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:54.467568 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:25:54.470200 systemd-logind[2488]: Session 23 logged out. Waiting for processes to exit. Dec 16 03:25:54.472669 systemd-logind[2488]: Removed session 23. 
Dec 16 03:25:56.346559 kubelet[3995]: E1216 03:25:56.346453 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168" Dec 16 03:25:58.348846 kubelet[3995]: E1216 03:25:58.348722 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736" Dec 16 03:25:59.347658 kubelet[3995]: E1216 03:25:59.347612 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142" Dec 16 03:25:59.571372 systemd[1]: Started sshd@20-10.200.8.23:22-10.200.16.10:48574.service - OpenSSH per-connection server daemon (10.200.16.10:48574). Dec 16 03:25:59.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.23:22-10.200.16.10:48574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:25:59.572441 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 16 03:25:59.572470 kernel: audit: type=1130 audit(1765855559.571:904): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.23:22-10.200.16.10:48574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:25:59.674000 audit[6471]: NETFILTER_CFG table=filter:158 family=2 entries=26 op=nft_register_rule pid=6471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:59.679261 kernel: audit: type=1325 audit(1765855559.674:905): table=filter:158 family=2 entries=26 op=nft_register_rule pid=6471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:59.674000 audit[6471]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe8c3aa490 a2=0 a3=7ffe8c3aa47c items=0 ppid=4100 pid=6471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:59.685163 kernel: audit: type=1300 audit(1765855559.674:905): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe8c3aa490 a2=0 a3=7ffe8c3aa47c items=0 ppid=4100 pid=6471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:59.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:59.702010 kernel: audit: type=1327 audit(1765855559.674:905): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:59.702089 kernel: audit: type=1325 audit(1765855559.683:906): table=nat:159 family=2 entries=104 op=nft_register_chain pid=6471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:59.683000 audit[6471]: NETFILTER_CFG table=nat:159 family=2 entries=104 op=nft_register_chain pid=6471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:25:59.683000 audit[6471]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe8c3aa490 a2=0 a3=7ffe8c3aa47c items=0 ppid=4100 pid=6471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:59.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:25:59.720479 kernel: audit: type=1300 audit(1765855559.683:906): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe8c3aa490 a2=0 a3=7ffe8c3aa47c items=0 ppid=4100 pid=6471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:25:59.720547 kernel: audit: type=1327 audit(1765855559.683:906): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:26:00.121000 audit[6467]: USER_ACCT pid=6467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.121689 sshd[6467]: Accepted publickey for core from 10.200.16.10 port 48574 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:26:00.123955 sshd-session[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 
03:26:00.122000 audit[6467]: CRED_ACQ pid=6467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.129372 systemd-logind[2488]: New session 24 of user core. Dec 16 03:26:00.134309 kernel: audit: type=1101 audit(1765855560.121:907): pid=6467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.134474 kernel: audit: type=1103 audit(1765855560.122:908): pid=6467 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.136333 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 03:26:00.139311 kernel: audit: type=1006 audit(1765855560.123:909): pid=6467 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 16 03:26:00.123000 audit[6467]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd9a516d0 a2=3 a3=0 items=0 ppid=1 pid=6467 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:00.123000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:26:00.140000 audit[6467]: USER_START pid=6467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.142000 audit[6473]: CRED_ACQ pid=6473 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.852660 sshd[6473]: Connection closed by 10.200.16.10 port 48574 Dec 16 03:26:00.854392 sshd-session[6467]: pam_unix(sshd:session): session closed for user core Dec 16 03:26:00.855000 audit[6467]: USER_END pid=6467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.855000 audit[6467]: CRED_DISP pid=6467 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:00.859209 systemd[1]: sshd@20-10.200.8.23:22-10.200.16.10:48574.service: Deactivated successfully. Dec 16 03:26:00.859575 systemd-logind[2488]: Session 24 logged out. Waiting for processes to exit. 
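The kubelet messages in this part of the log alternate between ErrImagePull (a fresh pull attempt that hit the registry 404) and ImagePullBackOff (kubelet waiting out a back-off period before retrying the same image). The delay between attempts grows by doubling up to a cap, which is why the same "Back-off pulling image" errors keep reappearing at widening intervals. The sketch below only illustrates that doubling-with-a-cap pattern; the 10-second start and 5-minute cap mirror commonly cited kubelet defaults and should be treated as assumptions rather than a statement about this node's configuration.

// backoff illustrates the doubling-with-a-cap retry pattern behind the
// alternating ErrImagePull / ImagePullBackOff entries above. The initial
// delay and cap are assumed values, not read from this node.
package main

import (
	"fmt"
	"time"
)

func main() {
	initial := 10 * time.Second // assumed initial back-off
	maxDelay := 5 * time.Minute // assumed cap

	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: pull fails (404), back off %s before retrying\n",
			attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Once the referenced tag actually exists (or the pod spec is fixed to a
	// valid tag), a successful pull clears the back-off for that image.
}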
Dec 16 03:26:00.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.23:22-10.200.16.10:48574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:00.862869 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 03:26:00.866406 systemd-logind[2488]: Removed session 24. Dec 16 03:26:04.350193 kubelet[3995]: E1216 03:26:04.349453 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3" Dec 16 03:26:04.350193 kubelet[3995]: E1216 03:26:04.349912 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a" Dec 16 03:26:05.346990 kubelet[3995]: E1216 03:26:05.346630 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5" Dec 16 03:26:05.968889 systemd[1]: Started sshd@21-10.200.8.23:22-10.200.16.10:51530.service - OpenSSH per-connection server daemon (10.200.16.10:51530). Dec 16 03:26:05.975746 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:26:05.975780 kernel: audit: type=1130 audit(1765855565.967:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.23:22-10.200.16.10:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:26:05.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.23:22-10.200.16.10:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:26:06.516000 audit[6485]: USER_ACCT pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:06.526162 kernel: audit: type=1101 audit(1765855566.516:916): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:06.527067 sshd[6485]: Accepted publickey for core from 10.200.16.10 port 51530 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw Dec 16 03:26:06.528365 sshd-session[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:26:06.526000 audit[6485]: CRED_ACQ pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:06.539160 kernel: audit: type=1103 audit(1765855566.526:917): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:06.544158 kernel: audit: type=1006 audit(1765855566.526:918): pid=6485 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 16 03:26:06.547631 systemd-logind[2488]: New session 25 of user core. Dec 16 03:26:06.526000 audit[6485]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffc966ba0 a2=3 a3=0 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.556055 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 03:26:06.556203 kernel: audit: type=1300 audit(1765855566.526:918): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffffc966ba0 a2=3 a3=0 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:26:06.526000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:26:06.563163 kernel: audit: type=1327 audit(1765855566.526:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:26:06.557000 audit[6485]: USER_START pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:06.562000 audit[6489]: CRED_ACQ pid=6489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:06.577843 kernel: audit: type=1105 audit(1765855566.557:919): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:06.577911 kernel: audit: type=1103 audit(1765855566.562:920): pid=6489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:07.007182 sshd[6489]: Connection closed by 10.200.16.10 port 51530 Dec 16 03:26:07.007736 sshd-session[6485]: pam_unix(sshd:session): session closed for user core Dec 16 03:26:07.009000 audit[6485]: USER_END pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:07.013490 systemd[1]: sshd@21-10.200.8.23:22-10.200.16.10:51530.service: Deactivated successfully. Dec 16 03:26:07.016028 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 03:26:07.019496 systemd-logind[2488]: Session 25 logged out. Waiting for processes to exit. Dec 16 03:26:07.009000 audit[6485]: CRED_DISP pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Dec 16 03:26:07.020631 systemd-logind[2488]: Removed session 25. 
Dec 16 03:26:07.023556 kernel: audit: type=1106 audit(1765855567.009:921): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:07.023612 kernel: audit: type=1104 audit(1765855567.009:922): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:07.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.23:22-10.200.16.10:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:07.347173 kubelet[3995]: E1216 03:26:07.347090 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43"
Dec 16 03:26:09.349494 kubelet[3995]: E1216 03:26:09.349387 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168"
Dec 16 03:26:09.351026 kubelet[3995]: E1216 03:26:09.350459 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736"
Dec 16 03:26:11.348236 kubelet[3995]: E1216 03:26:11.348079 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142"
Dec 16 03:26:12.116470 systemd[1]: Started sshd@22-10.200.8.23:22-10.200.16.10:49476.service - OpenSSH per-connection server daemon (10.200.16.10:49476).
Dec 16 03:26:12.123823 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:26:12.125221 kernel: audit: type=1130 audit(1765855572.115:924): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.23:22-10.200.16.10:49476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:12.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.23:22-10.200.16.10:49476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:12.656000 audit[6527]: USER_ACCT pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:12.658057 sshd[6527]: Accepted publickey for core from 10.200.16.10 port 49476 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw
Dec 16 03:26:12.662626 sshd-session[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:26:12.664054 kernel: audit: type=1101 audit(1765855572.656:925): pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:12.664366 kernel: audit: type=1103 audit(1765855572.660:926): pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:12.660000 audit[6527]: CRED_ACQ pid=6527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:12.669962 kernel: audit: type=1006 audit(1765855572.660:927): pid=6527 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Dec 16 03:26:12.660000 audit[6527]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1122b050 a2=3 a3=0 items=0 ppid=1 pid=6527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:26:12.674840 kernel: audit: type=1300 audit(1765855572.660:927): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1122b050 a2=3 a3=0 items=0 ppid=1 pid=6527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:26:12.678762 kernel: audit: type=1327 audit(1765855572.660:927): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:26:12.660000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:26:12.678515 systemd-logind[2488]: New session 26 of user core.
Dec 16 03:26:12.679329 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 03:26:12.680000 audit[6527]: USER_START pid=6527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:12.686000 audit[6531]: CRED_ACQ pid=6531 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:12.693317 kernel: audit: type=1105 audit(1765855572.680:928): pid=6527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:12.693361 kernel: audit: type=1103 audit(1765855572.686:929): pid=6531 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:13.063241 sshd[6531]: Connection closed by 10.200.16.10 port 49476
Dec 16 03:26:13.064311 sshd-session[6527]: pam_unix(sshd:session): session closed for user core
Dec 16 03:26:13.065000 audit[6527]: USER_END pid=6527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:13.070358 systemd[1]: sshd@22-10.200.8.23:22-10.200.16.10:49476.service: Deactivated successfully.
Dec 16 03:26:13.072996 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 03:26:13.065000 audit[6527]: CRED_DISP pid=6527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:13.077415 systemd-logind[2488]: Session 26 logged out. Waiting for processes to exit.
Dec 16 03:26:13.078980 systemd-logind[2488]: Removed session 26.
Dec 16 03:26:13.082144 kernel: audit: type=1106 audit(1765855573.065:930): pid=6527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:13.082213 kernel: audit: type=1104 audit(1765855573.065:931): pid=6527 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:13.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.23:22-10.200.16.10:49476 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:16.348162 kubelet[3995]: E1216 03:26:16.347646 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-65v57" podUID="928c764d-cf1a-4e24-874a-b4bd241b86e5"
Dec 16 03:26:17.346470 kubelet[3995]: E1216 03:26:17.346415 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85c7d9d48b-hc6qj" podUID="e0164474-95e7-4b01-988d-4ae10762d8d3"
Dec 16 03:26:18.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.23:22-10.200.16.10:49492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:18.187889 systemd[1]: Started sshd@23-10.200.8.23:22-10.200.16.10:49492.service - OpenSSH per-connection server daemon (10.200.16.10:49492).
Dec 16 03:26:18.189565 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:26:18.189597 kernel: audit: type=1130 audit(1765855578.186:933): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.23:22-10.200.16.10:49492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:18.739968 sshd[6542]: Accepted publickey for core from 10.200.16.10 port 49492 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw
Dec 16 03:26:18.738000 audit[6542]: USER_ACCT pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:18.749212 kernel: audit: type=1101 audit(1765855578.738:934): pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:18.749831 sshd-session[6542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:26:18.747000 audit[6542]: CRED_ACQ pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:18.763169 kernel: audit: type=1103 audit(1765855578.747:935): pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:18.769390 systemd-logind[2488]: New session 27 of user core.
Dec 16 03:26:18.772178 kernel: audit: type=1006 audit(1765855578.747:936): pid=6542 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Dec 16 03:26:18.747000 audit[6542]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff85834530 a2=3 a3=0 items=0 ppid=1 pid=6542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:26:18.782172 kernel: audit: type=1300 audit(1765855578.747:936): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff85834530 a2=3 a3=0 items=0 ppid=1 pid=6542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:26:18.783268 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 16 03:26:18.747000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:26:18.789192 kernel: audit: type=1327 audit(1765855578.747:936): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:26:18.789000 audit[6542]: USER_START pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:18.800205 kernel: audit: type=1105 audit(1765855578.789:937): pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:18.799000 audit[6546]: CRED_ACQ pid=6546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:18.814648 kernel: audit: type=1103 audit(1765855578.799:938): pid=6546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:19.207026 sshd[6546]: Connection closed by 10.200.16.10 port 49492
Dec 16 03:26:19.207589 sshd-session[6542]: pam_unix(sshd:session): session closed for user core
Dec 16 03:26:19.208000 audit[6542]: USER_END pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:19.213176 systemd[1]: sshd@23-10.200.8.23:22-10.200.16.10:49492.service: Deactivated successfully.
Dec 16 03:26:19.215806 systemd[1]: session-27.scope: Deactivated successfully.
Dec 16 03:26:19.218073 systemd-logind[2488]: Session 27 logged out. Waiting for processes to exit.
Dec 16 03:26:19.208000 audit[6542]: CRED_DISP pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:19.220083 systemd-logind[2488]: Removed session 27.
Dec 16 03:26:19.223699 kernel: audit: type=1106 audit(1765855579.208:939): pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:19.223756 kernel: audit: type=1104 audit(1765855579.208:940): pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:19.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.23:22-10.200.16.10:49492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:19.346031 kubelet[3995]: E1216 03:26:19.345981 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86474dbd54-fphkv" podUID="b0a716ce-6354-47ff-896b-1da783a25f3a"
Dec 16 03:26:20.350475 kubelet[3995]: E1216 03:26:20.350412 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-srg9b" podUID="52f35797-5a94-4b5f-8ac7-147ca2758736"
Dec 16 03:26:21.346494 kubelet[3995]: E1216 03:26:21.346167 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-284xb" podUID="17fccc4a-a08c-4495-a01b-bad3cd3eab43"
Dec 16 03:26:22.352568 kubelet[3995]: E1216 03:26:22.351808 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69c4bb98b9-88qzw" podUID="453a3c95-d107-4f4e-b7f5-ee250655b168"
Dec 16 03:26:23.347057 kubelet[3995]: E1216 03:26:23.347010 3995 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fd5b56957-fm9l2" podUID="08adb93e-a5f4-4e36-9d73-5c61441c3142"
Dec 16 03:26:24.325655 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:26:24.325772 kernel: audit: type=1130 audit(1765855584.320:942): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.23:22-10.200.16.10:35506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:24.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.23:22-10.200.16.10:35506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:24.321948 systemd[1]: Started sshd@24-10.200.8.23:22-10.200.16.10:35506.service - OpenSSH per-connection server daemon (10.200.16.10:35506).
Dec 16 03:26:24.882000 audit[6560]: USER_ACCT pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:24.888744 sshd-session[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:26:24.889831 sshd[6560]: Accepted publickey for core from 10.200.16.10 port 35506 ssh2: RSA SHA256:tVTDUNW947aAkFL4niSGbJit7KfQLURL9mjv39l1lSw
Dec 16 03:26:24.890373 kernel: audit: type=1101 audit(1765855584.882:943): pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:24.887000 audit[6560]: CRED_ACQ pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:24.897535 kernel: audit: type=1103 audit(1765855584.887:944): pid=6560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:24.897594 kernel: audit: type=1006 audit(1765855584.887:945): pid=6560 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Dec 16 03:26:24.887000 audit[6560]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd90feaf0 a2=3 a3=0 items=0 ppid=1 pid=6560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:26:24.902196 kernel: audit: type=1300 audit(1765855584.887:945): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd90feaf0 a2=3 a3=0 items=0 ppid=1 pid=6560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:26:24.905600 kernel: audit: type=1327 audit(1765855584.887:945): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:26:24.887000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:26:24.904454 systemd-logind[2488]: New session 28 of user core.
Dec 16 03:26:24.914314 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 16 03:26:24.915000 audit[6560]: USER_START pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:24.924176 kernel: audit: type=1105 audit(1765855584.915:946): pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:24.923000 audit[6564]: CRED_ACQ pid=6564 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:24.930161 kernel: audit: type=1103 audit(1765855584.923:947): pid=6564 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:25.249715 sshd[6564]: Connection closed by 10.200.16.10 port 35506
Dec 16 03:26:25.250484 sshd-session[6560]: pam_unix(sshd:session): session closed for user core
Dec 16 03:26:25.250000 audit[6560]: USER_END pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:25.257277 systemd[1]: sshd@24-10.200.8.23:22-10.200.16.10:35506.service: Deactivated successfully.
Dec 16 03:26:25.250000 audit[6560]: CRED_DISP pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:25.261737 kernel: audit: type=1106 audit(1765855585.250:948): pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:25.261803 kernel: audit: type=1104 audit(1765855585.250:949): pid=6560 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Dec 16 03:26:25.262568 systemd[1]: session-28.scope: Deactivated successfully.
Dec 16 03:26:25.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.23:22-10.200.16.10:35506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:26:25.265427 systemd-logind[2488]: Session 28 logged out. Waiting for processes to exit.
Dec 16 03:26:25.266211 systemd-logind[2488]: Removed session 28.