Jan 20 06:46:15.761274 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 20 04:11:16 -00 2026 Jan 20 06:46:15.761297 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:46:15.761309 kernel: BIOS-provided physical RAM map: Jan 20 06:46:15.761316 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 20 06:46:15.761323 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 20 06:46:15.761330 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Jan 20 06:46:15.761339 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Jan 20 06:46:15.761347 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Jan 20 06:46:15.761354 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Jan 20 06:46:15.761363 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 20 06:46:15.761371 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 20 06:46:15.761379 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 20 06:46:15.761386 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 20 06:46:15.761394 kernel: printk: legacy bootconsole [earlyser0] enabled Jan 20 06:46:15.761408 kernel: NX (Execute Disable) protection: active Jan 20 06:46:15.761417 kernel: APIC: Static calls initialized Jan 20 06:46:15.761424 kernel: efi: EFI v2.7 by Microsoft Jan 20 06:46:15.761432 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa2018 RNG=0x3ffd2018 Jan 20 06:46:15.761440 kernel: random: crng init done Jan 20 06:46:15.761448 kernel: secureboot: Secure boot disabled Jan 20 06:46:15.761455 kernel: SMBIOS 3.1.0 present. 
Jan 20 06:46:15.761463 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025 Jan 20 06:46:15.761471 kernel: DMI: Memory slots populated: 2/2 Jan 20 06:46:15.761479 kernel: Hypervisor detected: Microsoft Hyper-V Jan 20 06:46:15.761487 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Jan 20 06:46:15.761497 kernel: Hyper-V: Nested features: 0x3e0101 Jan 20 06:46:15.761505 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 20 06:46:15.761513 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 20 06:46:15.761521 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 20 06:46:15.761528 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 20 06:46:15.761536 kernel: tsc: Detected 2300.000 MHz processor Jan 20 06:46:15.761543 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 20 06:46:15.761553 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 20 06:46:15.761562 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Jan 20 06:46:15.761572 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 20 06:46:15.761581 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 20 06:46:15.761590 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Jan 20 06:46:15.761598 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Jan 20 06:46:15.761604 kernel: Using GB pages for direct mapping Jan 20 06:46:15.761612 kernel: ACPI: Early table checksum verification disabled Jan 20 06:46:15.761624 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 20 06:46:15.761632 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 06:46:15.761640 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 06:46:15.761648 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 20 06:46:15.761657 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 20 06:46:15.761665 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 06:46:15.761673 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 06:46:15.761681 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 06:46:15.761690 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 20 06:46:15.761698 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Jan 20 06:46:15.761707 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 20 06:46:15.761716 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 20 06:46:15.761725 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 20 06:46:15.761734 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 20 06:46:15.761742 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 20 06:46:15.761751 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 20 06:46:15.761759 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 20 06:46:15.761767 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5057] Jan 20 06:46:15.761775 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Jan 20 06:46:15.761784 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 20 06:46:15.761793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 20 06:46:15.761802 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Jan 20 06:46:15.761811 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Jan 20 06:46:15.761821 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Jan 20 06:46:15.761829 kernel: Zone ranges: Jan 20 06:46:15.761837 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 20 06:46:15.761847 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 20 06:46:15.761855 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 20 06:46:15.761863 kernel: Device empty Jan 20 06:46:15.761872 kernel: Movable zone start for each node Jan 20 06:46:15.761880 kernel: Early memory node ranges Jan 20 06:46:15.761888 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 20 06:46:15.761896 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Jan 20 06:46:15.761905 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Jan 20 06:46:15.761913 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 20 06:46:15.761921 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 20 06:46:15.761928 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 20 06:46:15.761936 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 20 06:46:15.761943 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 20 06:46:15.761953 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 20 06:46:15.761962 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 20 06:46:15.761969 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 20 06:46:15.761976 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 20 06:46:15.761983 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 20 06:46:15.761991 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 20 06:46:15.761999 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 20 06:46:15.762007 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 20 06:46:15.762014 kernel: TSC deadline timer available Jan 20 06:46:15.762037 kernel: CPU topo: Max. logical packages: 1 Jan 20 06:46:15.762048 kernel: CPU topo: Max. logical dies: 1 Jan 20 06:46:15.762054 kernel: CPU topo: Max. dies per package: 1 Jan 20 06:46:15.762062 kernel: CPU topo: Max. threads per core: 2 Jan 20 06:46:15.762068 kernel: CPU topo: Num. cores per package: 1 Jan 20 06:46:15.762075 kernel: CPU topo: Num. 
threads per package: 2 Jan 20 06:46:15.762083 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jan 20 06:46:15.762092 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 20 06:46:15.762099 kernel: Booting paravirtualized kernel on Hyper-V Jan 20 06:46:15.762107 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 20 06:46:15.762115 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 20 06:46:15.762122 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jan 20 06:46:15.762130 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jan 20 06:46:15.762138 kernel: pcpu-alloc: [0] 0 1 Jan 20 06:46:15.762146 kernel: Hyper-V: PV spinlocks enabled Jan 20 06:46:15.762153 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 20 06:46:15.762162 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:46:15.762170 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 20 06:46:15.762177 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 06:46:15.762185 kernel: Fallback order for Node 0: 0 Jan 20 06:46:15.762195 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Jan 20 06:46:15.762202 kernel: Policy zone: Normal Jan 20 06:46:15.762270 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 06:46:15.762278 kernel: software IO TLB: area num 2. Jan 20 06:46:15.762285 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 06:46:15.762293 kernel: ftrace: allocating 40128 entries in 157 pages Jan 20 06:46:15.762300 kernel: ftrace: allocated 157 pages with 5 groups Jan 20 06:46:15.762307 kernel: Dynamic Preempt: voluntary Jan 20 06:46:15.762316 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 06:46:15.762325 kernel: rcu: RCU event tracing is enabled. Jan 20 06:46:15.762339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 06:46:15.762349 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 06:46:15.762357 kernel: Rude variant of Tasks RCU enabled. Jan 20 06:46:15.762365 kernel: Tracing variant of Tasks RCU enabled. Jan 20 06:46:15.762372 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 06:46:15.762380 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 06:46:15.762389 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:46:15.762399 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:46:15.762407 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 06:46:15.762416 kernel: Using NULL legacy PIC Jan 20 06:46:15.762423 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 20 06:46:15.762432 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 20 06:46:15.762440 kernel: Console: colour dummy device 80x25 Jan 20 06:46:15.762448 kernel: printk: legacy console [tty1] enabled Jan 20 06:46:15.762456 kernel: printk: legacy console [ttyS0] enabled Jan 20 06:46:15.762464 kernel: printk: legacy bootconsole [earlyser0] disabled Jan 20 06:46:15.762473 kernel: ACPI: Core revision 20240827 Jan 20 06:46:15.762481 kernel: Failed to register legacy timer interrupt Jan 20 06:46:15.762490 kernel: APIC: Switch to symmetric I/O mode setup Jan 20 06:46:15.762498 kernel: x2apic enabled Jan 20 06:46:15.762505 kernel: APIC: Switched APIC routing to: physical x2apic Jan 20 06:46:15.762512 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 20 06:46:15.762521 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 20 06:46:15.762529 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Jan 20 06:46:15.762538 kernel: Hyper-V: Using IPI hypercalls Jan 20 06:46:15.762548 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 20 06:46:15.762555 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 20 06:46:15.762563 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 20 06:46:15.762571 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 20 06:46:15.762578 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 20 06:46:15.762587 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 20 06:46:15.762595 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns Jan 20 06:46:15.762605 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000) Jan 20 06:46:15.762614 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 20 06:46:15.762621 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 20 06:46:15.762629 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 20 06:46:15.762636 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 20 06:46:15.762643 kernel: Spectre V2 : Mitigation: Retpolines Jan 20 06:46:15.762651 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 20 06:46:15.762659 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 20 06:46:15.762669 kernel: RETBleed: Vulnerable Jan 20 06:46:15.762676 kernel: Speculative Store Bypass: Vulnerable Jan 20 06:46:15.762684 kernel: active return thunk: its_return_thunk Jan 20 06:46:15.762691 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 20 06:46:15.762698 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 20 06:46:15.762705 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 20 06:46:15.762713 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 20 06:46:15.762721 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 20 06:46:15.762729 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 20 06:46:15.762737 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 20 06:46:15.762746 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Jan 20 06:46:15.762753 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Jan 20 06:46:15.762761 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Jan 20 06:46:15.762768 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 20 06:46:15.762775 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 20 06:46:15.762783 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 20 06:46:15.762790 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 20 06:46:15.762798 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Jan 20 06:46:15.762806 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Jan 20 06:46:15.762814 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Jan 20 06:46:15.762821 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Jan 20 06:46:15.762830 kernel: Freeing SMP alternatives memory: 32K Jan 20 06:46:15.762837 kernel: pid_max: default: 32768 minimum: 301 Jan 20 06:46:15.762845 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 06:46:15.762853 kernel: landlock: Up and running. Jan 20 06:46:15.762860 kernel: SELinux: Initializing. Jan 20 06:46:15.762869 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 20 06:46:15.762876 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 20 06:46:15.762884 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Jan 20 06:46:15.762892 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Jan 20 06:46:15.762900 kernel: signal: max sigframe size: 11952 Jan 20 06:46:15.762909 kernel: rcu: Hierarchical SRCU implementation. Jan 20 06:46:15.762917 kernel: rcu: Max phase no-delay instances is 400. Jan 20 06:46:15.762926 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 06:46:15.762934 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 20 06:46:15.762943 kernel: smp: Bringing up secondary CPUs ... Jan 20 06:46:15.762950 kernel: smpboot: x86: Booting SMP configuration: Jan 20 06:46:15.762958 kernel: .... 
node #0, CPUs: #1 Jan 20 06:46:15.762966 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 06:46:15.762975 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS) Jan 20 06:46:15.762983 kernel: Memory: 8093408K/8383228K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 283604K reserved, 0K cma-reserved) Jan 20 06:46:15.762992 kernel: devtmpfs: initialized Jan 20 06:46:15.763000 kernel: x86/mm: Memory block size: 128MB Jan 20 06:46:15.763009 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 20 06:46:15.763017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 06:46:15.763025 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 06:46:15.763034 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 06:46:15.763041 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 06:46:15.763049 kernel: audit: initializing netlink subsys (disabled) Jan 20 06:46:15.763058 kernel: audit: type=2000 audit(1768891570.071:1): state=initialized audit_enabled=0 res=1 Jan 20 06:46:15.763066 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 06:46:15.763074 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 20 06:46:15.763082 kernel: cpuidle: using governor menu Jan 20 06:46:15.763091 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 06:46:15.763099 kernel: dca service started, version 1.12.1 Jan 20 06:46:15.763106 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Jan 20 06:46:15.763114 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 20 06:46:15.763123 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 20 06:46:15.763131 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 06:46:15.763140 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 06:46:15.763149 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 06:46:15.763157 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 06:46:15.763165 kernel: ACPI: Added _OSI(Module Device) Jan 20 06:46:15.763173 kernel: ACPI: Added _OSI(Processor Device) Jan 20 06:46:15.763181 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 06:46:15.763189 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 06:46:15.763198 kernel: ACPI: Interpreter enabled Jan 20 06:46:15.763216 kernel: ACPI: PM: (supports S0 S5) Jan 20 06:46:15.763225 kernel: ACPI: Using IOAPIC for interrupt routing Jan 20 06:46:15.763233 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 20 06:46:15.763241 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 20 06:46:15.763249 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 20 06:46:15.763257 kernel: iommu: Default domain type: Translated Jan 20 06:46:15.763265 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 20 06:46:15.763275 kernel: efivars: Registered efivars operations Jan 20 06:46:15.763283 kernel: PCI: Using ACPI for IRQ routing Jan 20 06:46:15.763291 kernel: PCI: System does not support PCI Jan 20 06:46:15.763299 kernel: vgaarb: loaded Jan 20 06:46:15.763306 kernel: clocksource: Switched to clocksource tsc-early Jan 20 06:46:15.763314 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 06:46:15.763322 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 06:46:15.763332 kernel: pnp: PnP ACPI init Jan 20 06:46:15.763340 kernel: pnp: PnP ACPI: found 3 devices Jan 20 06:46:15.763349 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 20 06:46:15.763356 kernel: NET: Registered PF_INET protocol family Jan 20 06:46:15.763364 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 20 06:46:15.763372 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 20 06:46:15.763380 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 06:46:15.763390 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 06:46:15.763398 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 20 06:46:15.763407 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 20 06:46:15.763415 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 20 06:46:15.763422 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 20 06:46:15.763430 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 06:46:15.763438 kernel: NET: Registered PF_XDP protocol family Jan 20 06:46:15.763448 kernel: PCI: CLS 0 bytes, default 64 Jan 20 06:46:15.763456 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 20 06:46:15.763464 kernel: software IO TLB: mapped [mem 0x000000003a9ad000-0x000000003e9ad000] (64MB) Jan 20 06:46:15.763473 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Jan 20 06:46:15.763481 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Jan 20 06:46:15.763489 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, 
max_idle_ns: 440795277976 ns Jan 20 06:46:15.763496 kernel: clocksource: Switched to clocksource tsc Jan 20 06:46:15.763506 kernel: Initialise system trusted keyrings Jan 20 06:46:15.763514 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 20 06:46:15.763522 kernel: Key type asymmetric registered Jan 20 06:46:15.763530 kernel: Asymmetric key parser 'x509' registered Jan 20 06:46:15.763539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 06:46:15.763547 kernel: io scheduler mq-deadline registered Jan 20 06:46:15.763555 kernel: io scheduler kyber registered Jan 20 06:46:15.763564 kernel: io scheduler bfq registered Jan 20 06:46:15.763571 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 06:46:15.763579 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 06:46:15.763588 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 06:46:15.763596 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 20 06:46:15.763604 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 06:46:15.763613 kernel: i8042: PNP: No PS/2 controller found. Jan 20 06:46:15.763742 kernel: rtc_cmos 00:02: registered as rtc0 Jan 20 06:46:15.763833 kernel: rtc_cmos 00:02: setting system clock to 2026-01-20T06:46:12 UTC (1768891572) Jan 20 06:46:15.763925 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 20 06:46:15.763936 kernel: intel_pstate: Intel P-state driver initializing Jan 20 06:46:15.763945 kernel: efifb: probing for efifb Jan 20 06:46:15.763953 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 20 06:46:15.763962 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 20 06:46:15.763970 kernel: efifb: scrolling: redraw Jan 20 06:46:15.763978 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 20 06:46:15.763986 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 06:46:15.763995 kernel: fb0: EFI VGA frame buffer device Jan 20 06:46:15.764003 kernel: pstore: Using crash dump compression: deflate Jan 20 06:46:15.764011 kernel: pstore: Registered efi_pstore as persistent store backend Jan 20 06:46:15.764020 kernel: NET: Registered PF_INET6 protocol family Jan 20 06:46:15.764029 kernel: Segment Routing with IPv6 Jan 20 06:46:15.764036 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 06:46:15.764044 kernel: NET: Registered PF_PACKET protocol family Jan 20 06:46:15.764052 kernel: Key type dns_resolver registered Jan 20 06:46:15.764060 kernel: IPI shorthand broadcast: enabled Jan 20 06:46:15.764068 kernel: sched_clock: Marking stable (1721240944, 81909336)->(2080799917, -277649637) Jan 20 06:46:15.764077 kernel: registered taskstats version 1 Jan 20 06:46:15.764086 kernel: Loading compiled-in X.509 certificates Jan 20 06:46:15.764094 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3e9049adf8f1d71dd06c731465288f6e1d353052' Jan 20 06:46:15.764102 kernel: Demotion targets for Node 0: null Jan 20 06:46:15.764110 kernel: Key type .fscrypt registered Jan 20 06:46:15.764117 kernel: Key type fscrypt-provisioning registered Jan 20 06:46:15.764126 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 06:46:15.764134 kernel: ima: Allocated hash algorithm: sha1 Jan 20 06:46:15.764144 kernel: ima: No architecture policies found Jan 20 06:46:15.764152 kernel: clk: Disabling unused clocks Jan 20 06:46:15.764159 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 20 06:46:15.764167 kernel: Write protecting the kernel read-only data: 47104k Jan 20 06:46:15.764175 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 20 06:46:15.764182 kernel: Run /init as init process Jan 20 06:46:15.764191 kernel: with arguments: Jan 20 06:46:15.764200 kernel: /init Jan 20 06:46:15.764218 kernel: with environment: Jan 20 06:46:15.764226 kernel: HOME=/ Jan 20 06:46:15.764234 kernel: TERM=linux Jan 20 06:46:15.764242 kernel: hv_vmbus: Vmbus version:5.3 Jan 20 06:46:15.764249 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 06:46:15.764258 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 06:46:15.764266 kernel: PTP clock support registered Jan 20 06:46:15.764276 kernel: hv_utils: Registering HyperV Utility Driver Jan 20 06:46:15.764284 kernel: hv_vmbus: registering driver hv_utils Jan 20 06:46:15.764292 kernel: hv_utils: Shutdown IC version 3.2 Jan 20 06:46:15.764299 kernel: hv_utils: Heartbeat IC version 3.0 Jan 20 06:46:15.764307 kernel: hv_utils: TimeSync IC version 4.0 Jan 20 06:46:15.764314 kernel: SCSI subsystem initialized Jan 20 06:46:15.764322 kernel: hv_vmbus: registering driver hv_pci Jan 20 06:46:15.764441 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jan 20 06:46:15.764540 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jan 20 06:46:15.764650 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jan 20 06:46:15.764747 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 06:46:15.764863 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jan 20 06:46:15.764973 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jan 20 06:46:15.765073 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 06:46:15.765178 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jan 20 06:46:15.765187 kernel: hv_vmbus: registering driver hv_storvsc Jan 20 06:46:15.765307 kernel: scsi host0: storvsc_host_t Jan 20 06:46:15.765424 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 20 06:46:15.765435 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 06:46:15.765443 kernel: hv_vmbus: registering driver hid_hyperv Jan 20 06:46:15.765451 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 20 06:46:15.765548 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 20 06:46:15.765559 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 20 06:46:15.765570 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 20 06:46:15.765656 kernel: nvme nvme0: pci function c05b:00:00.0 Jan 20 06:46:15.765766 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jan 20 06:46:15.765845 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 20 06:46:15.765855 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 20 06:46:15.765959 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 20 06:46:15.765969 kernel: cdrom: 
Uniform CD-ROM driver Revision: 3.20 Jan 20 06:46:15.766131 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 20 06:46:15.766149 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 06:46:15.766158 kernel: device-mapper: uevent: version 1.0.3 Jan 20 06:46:15.766168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 06:46:15.766177 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 20 06:46:15.766200 kernel: raid6: avx512x4 gen() 47979 MB/s Jan 20 06:46:15.766227 kernel: raid6: avx512x2 gen() 46457 MB/s Jan 20 06:46:15.766236 kernel: raid6: avx512x1 gen() 30336 MB/s Jan 20 06:46:15.766245 kernel: raid6: avx2x4 gen() 41891 MB/s Jan 20 06:46:15.766254 kernel: raid6: avx2x2 gen() 43630 MB/s Jan 20 06:46:15.766263 kernel: raid6: avx2x1 gen() 32540 MB/s Jan 20 06:46:15.766272 kernel: raid6: using algorithm avx512x4 gen() 47979 MB/s Jan 20 06:46:15.766284 kernel: raid6: .... xor() 7815 MB/s, rmw enabled Jan 20 06:46:15.766293 kernel: raid6: using avx512x2 recovery algorithm Jan 20 06:46:15.766302 kernel: xor: automatically using best checksumming function avx Jan 20 06:46:15.766311 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 06:46:15.766320 kernel: BTRFS: device fsid 98f50efd-4872-4dd8-af35-5e494490b9aa devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (944) Jan 20 06:46:15.766328 kernel: BTRFS info (device dm-0): first mount of filesystem 98f50efd-4872-4dd8-af35-5e494490b9aa Jan 20 06:46:15.766337 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:46:15.766347 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 20 06:46:15.766357 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 06:46:15.766367 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 06:46:15.766376 kernel: loop: module loaded Jan 20 06:46:15.766384 kernel: loop0: detected capacity change from 0 to 100552 Jan 20 06:46:15.766393 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 06:46:15.766405 systemd[1]: Successfully made /usr/ read-only. Jan 20 06:46:15.766422 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:46:15.766434 systemd[1]: Detected virtualization microsoft. Jan 20 06:46:15.766442 systemd[1]: Detected architecture x86-64. Jan 20 06:46:15.766453 systemd[1]: Running in initrd. Jan 20 06:46:15.766462 systemd[1]: No hostname configured, using default hostname. Jan 20 06:46:15.766471 systemd[1]: Hostname set to . Jan 20 06:46:15.766483 systemd[1]: Initializing machine ID from random generator. Jan 20 06:46:15.766493 systemd[1]: Queued start job for default target initrd.target. Jan 20 06:46:15.766503 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:46:15.766514 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:46:15.766523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 20 06:46:15.766532 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 06:46:15.766541 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:46:15.766550 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 06:46:15.766559 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 06:46:15.766567 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:46:15.766577 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:46:15.766586 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:46:15.766595 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:46:15.766606 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:46:15.766617 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:46:15.766628 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:46:15.766646 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:46:15.766657 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:46:15.766667 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:46:15.766678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 06:46:15.766686 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 06:46:15.766695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:46:15.766704 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:46:15.766716 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:46:15.766726 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:46:15.766737 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 06:46:15.766745 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 06:46:15.766754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:46:15.766763 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 06:46:15.766773 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 06:46:15.766787 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 06:46:15.766799 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:46:15.766808 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:46:15.766818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:46:15.766831 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 06:46:15.766857 systemd-journald[1081]: Collecting audit messages is enabled. Jan 20 06:46:15.766882 kernel: audit: type=1130 audit(1768891575.760:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:46:15.766897 systemd-journald[1081]: Journal started Jan 20 06:46:15.766917 systemd-journald[1081]: Runtime Journal (/run/log/journal/ecc0184ff6dd4729a3891152f04edb06) is 8M, max 158.5M, 150.5M free. Jan 20 06:46:15.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.768225 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:46:15.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.775057 kernel: audit: type=1130 audit(1768891575.770:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.775091 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:46:15.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.780393 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 06:46:15.785170 kernel: audit: type=1130 audit(1768891575.778:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.786336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:46:15.787335 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 06:46:15.797588 kernel: audit: type=1130 audit(1768891575.782:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.835455 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 06:46:15.859803 systemd-modules-load[1084]: Inserted module 'br_netfilter' Jan 20 06:46:15.860376 kernel: Bridge firewalling registered Jan 20 06:46:15.861462 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:46:15.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.862549 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 06:46:15.870505 kernel: audit: type=1130 audit(1768891575.860:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.905912 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 20 06:46:15.911311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:46:15.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.919501 kernel: audit: type=1130 audit(1768891575.912:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.918052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:46:15.934308 kernel: audit: type=1130 audit(1768891575.920:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.934329 kernel: audit: type=1130 audit(1768891575.923:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.934342 kernel: audit: type=1130 audit(1768891575.927:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.924553 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:46:15.935000 audit: BPF prog-id=6 op=LOAD Jan 20 06:46:15.928520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:46:15.933290 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 06:46:15.937373 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:46:15.950322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:46:15.960698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:46:15.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.973095 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:46:15.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:15.978946 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 20 06:46:16.052638 dracut-cmdline[1121]: dracut-109 Jan 20 06:46:16.055514 dracut-cmdline[1121]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=a6870adf74cfcb2bcf8e795f60488409634fe2cf3647ef4cd59c8df5545d99c0 Jan 20 06:46:16.089143 systemd-resolved[1105]: Positive Trust Anchors: Jan 20 06:46:16.089151 systemd-resolved[1105]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:46:16.089154 systemd-resolved[1105]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:46:16.089187 systemd-resolved[1105]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:46:16.162769 systemd-resolved[1105]: Defaulting to hostname 'linux'. Jan 20 06:46:16.163604 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:46:16.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.169154 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:46:16.205222 kernel: Loading iSCSI transport class v2.0-870. Jan 20 06:46:16.268228 kernel: iscsi: registered transport (tcp) Jan 20 06:46:16.325506 kernel: iscsi: registered transport (qla4xxx) Jan 20 06:46:16.325544 kernel: QLogic iSCSI HBA Driver Jan 20 06:46:16.374728 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:46:16.391260 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:46:16.398344 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 20 06:46:16.398364 kernel: audit: type=1130 audit(1768891576.390:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.392539 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:46:16.427852 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 06:46:16.437062 kernel: audit: type=1130 audit(1768891576.426:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:46:16.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.433316 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 06:46:16.433843 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 06:46:16.458127 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:46:16.463241 kernel: audit: type=1130 audit(1768891576.457:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.464240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:46:16.469906 kernel: audit: type=1334 audit(1768891576.462:18): prog-id=7 op=LOAD Jan 20 06:46:16.469925 kernel: audit: type=1334 audit(1768891576.462:19): prog-id=8 op=LOAD Jan 20 06:46:16.462000 audit: BPF prog-id=7 op=LOAD Jan 20 06:46:16.462000 audit: BPF prog-id=8 op=LOAD Jan 20 06:46:16.495580 systemd-udevd[1352]: Using default interface naming scheme 'v257'. Jan 20 06:46:16.506770 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:46:16.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.515229 kernel: audit: type=1130 audit(1768891576.510:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.517083 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 06:46:16.528539 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:46:16.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.534000 audit: BPF prog-id=9 op=LOAD Jan 20 06:46:16.536243 kernel: audit: type=1130 audit(1768891576.529:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.536267 kernel: audit: type=1334 audit(1768891576.534:22): prog-id=9 op=LOAD Jan 20 06:46:16.539331 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:46:16.544295 dracut-pre-trigger[1445]: rd.md=0: removing MD RAID activation Jan 20 06:46:16.563971 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:46:16.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.573718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 20 06:46:16.581301 kernel: audit: type=1130 audit(1768891576.566:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.582798 systemd-networkd[1459]: lo: Link UP Jan 20 06:46:16.582804 systemd-networkd[1459]: lo: Gained carrier Jan 20 06:46:16.594175 kernel: audit: type=1130 audit(1768891576.585:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.583296 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:46:16.586380 systemd[1]: Reached target network.target - Network. Jan 20 06:46:16.620651 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:46:16.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.626076 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 06:46:16.689778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:46:16.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.689911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:46:16.690044 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:46:16.693620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:46:16.708224 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 06:46:16.727453 kernel: hv_vmbus: registering driver hv_netvsc Jan 20 06:46:16.734902 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a68cc7c (unnamed net_device) (uninitialized): VF slot 1 added Jan 20 06:46:16.749500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:46:16.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:16.763179 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#97 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 06:46:16.761934 systemd-networkd[1459]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:46:16.761941 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 06:46:16.762498 systemd-networkd[1459]: eth0: Link UP Jan 20 06:46:16.762613 systemd-networkd[1459]: eth0: Gained carrier Jan 20 06:46:16.772116 kernel: AES CTR mode by8 optimization enabled Jan 20 06:46:16.762622 systemd-networkd[1459]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:46:16.778778 systemd-networkd[1459]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 20 06:46:16.949227 kernel: nvme nvme0: using unchecked data buffer Jan 20 06:46:17.041503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jan 20 06:46:17.046320 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 06:46:17.155194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 20 06:46:17.166411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jan 20 06:46:17.184888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jan 20 06:46:17.270533 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 06:46:17.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:17.273681 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:46:17.278263 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:46:17.281131 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:46:17.286807 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 06:46:17.331024 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:46:17.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:46:17.752151 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jan 20 06:46:17.752343 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jan 20 06:46:17.754792 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jan 20 06:46:17.756267 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 06:46:17.760292 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jan 20 06:46:17.763299 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jan 20 06:46:17.768494 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jan 20 06:46:17.768563 kernel: pci 7870:00:00.0: enabling Extended Tags Jan 20 06:46:17.780756 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 06:46:17.780913 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jan 20 06:46:17.785235 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jan 20 06:46:17.803791 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jan 20 06:46:17.813227 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jan 20 06:46:17.816279 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a68cc7c eth0: VF registering: eth1 Jan 20 06:46:17.816432 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jan 20 06:46:17.819813 systemd-networkd[1459]: eth1: Interface name change detected, renamed to enP30832s1. Jan 20 06:46:17.822335 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jan 20 06:46:17.924220 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 20 06:46:17.928186 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 20 06:46:17.928414 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a68cc7c eth0: Data path switched to VF: enP30832s1 Jan 20 06:46:17.928756 systemd-networkd[1459]: enP30832s1: Link UP Jan 20 06:46:17.929774 systemd-networkd[1459]: enP30832s1: Gained carrier Jan 20 06:46:17.993319 systemd-networkd[1459]: eth0: Gained IPv6LL Jan 20 06:46:18.342217 disk-uuid[1632]: Warning: The kernel is still using the old partition table. Jan 20 06:46:18.342217 disk-uuid[1632]: The new table will be used at the next reboot or after you Jan 20 06:46:18.342217 disk-uuid[1632]: run partprobe(8) or kpartx(8) Jan 20 06:46:18.342217 disk-uuid[1632]: The operation has completed successfully. Jan 20 06:46:18.350828 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 06:46:18.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:18.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:18.350911 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 06:46:18.353007 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 20 06:46:18.393227 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1679) Jan 20 06:46:18.393256 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:46:18.395269 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:46:18.417498 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 06:46:18.417530 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 20 06:46:18.417589 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 06:46:18.424225 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:46:18.424455 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 06:46:18.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:18.427532 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 06:46:19.527387 ignition[1698]: Ignition 2.24.0 Jan 20 06:46:19.527398 ignition[1698]: Stage: fetch-offline Jan 20 06:46:19.527492 ignition[1698]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:46:19.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:19.529808 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:46:19.527499 ignition[1698]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 06:46:19.541609 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
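The BTRFS messages at the start of this stretch are the OEM partition being mounted for ignition-setup. A hedged sketch of the underlying mount(2) call via ctypes, not of what systemd's mount units actually execute; the device, mountpoint, and option string are illustrative and mirror what the kernel reported (ssd optimizations, async discard).

    import ctypes
    import ctypes.util
    import os

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    def mount(source, target, fstype, options=""):
        """Thin wrapper around mount(2); flags stay 0 and options are passed as the data string."""
        ret = libc.mount(source.encode(), target.encode(), fstype.encode(), 0, options.encode())
        if ret != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err), target)

    # Illustrative only (paths/options assumed, normally resolved via /dev/disk/by-label/OEM):
    # mount("/dev/nvme0n1p6", "/mnt/oem", "btrfs", "ssd,discard=async")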
Jan 20 06:46:19.527580 ignition[1698]: parsed url from cmdline: "" Jan 20 06:46:19.527583 ignition[1698]: no config URL provided Jan 20 06:46:19.527586 ignition[1698]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:46:19.527592 ignition[1698]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:46:19.527596 ignition[1698]: failed to fetch config: resource requires networking Jan 20 06:46:19.528941 ignition[1698]: Ignition finished successfully Jan 20 06:46:19.559822 ignition[1712]: Ignition 2.24.0 Jan 20 06:46:19.559827 ignition[1712]: Stage: fetch Jan 20 06:46:19.560065 ignition[1712]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:46:19.560072 ignition[1712]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 06:46:19.560149 ignition[1712]: parsed url from cmdline: "" Jan 20 06:46:19.560152 ignition[1712]: no config URL provided Jan 20 06:46:19.560156 ignition[1712]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 06:46:19.560161 ignition[1712]: no config at "/usr/lib/ignition/user.ign" Jan 20 06:46:19.560178 ignition[1712]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 20 06:46:19.630679 ignition[1712]: GET result: OK Jan 20 06:46:19.630770 ignition[1712]: config has been read from IMDS userdata Jan 20 06:46:19.630791 ignition[1712]: parsing config with SHA512: de23aa7d10195aecd43f6afa3485a74185369e59ab49c25f4b82029103178fb83b72f16f9d23347f80a4203a2fc9c3ac572541e24024a08f2b31839980d326c9 Jan 20 06:46:19.636865 unknown[1712]: fetched base config from "system" Jan 20 06:46:19.636976 unknown[1712]: fetched base config from "system" Jan 20 06:46:19.637375 ignition[1712]: fetch: fetch complete Jan 20 06:46:19.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:19.636981 unknown[1712]: fetched user config from "azure" Jan 20 06:46:19.637379 ignition[1712]: fetch: fetch passed Jan 20 06:46:19.639349 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 06:46:19.637413 ignition[1712]: Ignition finished successfully Jan 20 06:46:19.645324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 06:46:19.668698 ignition[1719]: Ignition 2.24.0 Jan 20 06:46:19.668707 ignition[1719]: Stage: kargs Jan 20 06:46:19.668918 ignition[1719]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:46:19.668925 ignition[1719]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 06:46:19.669679 ignition[1719]: kargs: kargs passed Jan 20 06:46:19.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:19.672231 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 06:46:19.669708 ignition[1719]: Ignition finished successfully Jan 20 06:46:19.678254 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 06:46:19.692034 ignition[1725]: Ignition 2.24.0 Jan 20 06:46:19.692043 ignition[1725]: Stage: disks Jan 20 06:46:19.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:19.693819 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
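Ignition's fetch stage above pulls the config from the Azure Instance Metadata Service and logs the SHA512 of what it parsed. A hedged sketch of an equivalent fetch using only the standard library: the URL and api-version are the ones in the log, the Metadata: true request header is the usual IMDS requirement, and the assumption that userData arrives base64-encoded is mine, not something the log states.

    import base64
    import hashlib
    import urllib.request

    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    def fetch_userdata():
        req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            raw = resp.read()
        return base64.b64decode(raw)  # assumption: IMDS returns userData base64-encoded

    if __name__ == "__main__":
        config = fetch_userdata()
        print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())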
Jan 20 06:46:19.692240 ignition[1725]: no configs at "/usr/lib/ignition/base.d" Jan 20 06:46:19.697439 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 06:46:19.692247 ignition[1725]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 06:46:19.699341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 06:46:19.692908 ignition[1725]: disks: disks passed Jan 20 06:46:19.702249 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:46:19.692935 ignition[1725]: Ignition finished successfully Jan 20 06:46:19.703201 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:46:19.703231 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:46:19.704303 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 06:46:19.788613 systemd-fsck[1733]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Jan 20 06:46:19.792194 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 06:46:19.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:19.797833 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 06:46:20.125587 kernel: EXT4-fs (nvme0n1p9): mounted filesystem cccfbfd8-bb77-4a2f-9af9-c87f4957b904 r/w with ordered data mode. Quota mode: none. Jan 20 06:46:20.125526 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 06:46:20.128530 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 06:46:20.158899 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:46:20.164296 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 06:46:20.173071 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 06:46:20.175224 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 06:46:20.183287 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1742) Jan 20 06:46:20.175254 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:46:20.179588 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 06:46:20.186223 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:46:20.186263 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:46:20.192414 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 06:46:20.192486 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 20 06:46:20.192548 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 06:46:20.196154 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:46:20.201315 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
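The device units found earlier (OEM, EFI-SYSTEM, ROOT) and the fsck/mount of /dev/disk/by-label/ROOT above all resolve through udev's by-label symlinks. A small sketch that resolves the same links, assuming a system where /dev/disk/by-label is populated:

    import os

    def disks_by_label(base="/dev/disk/by-label"):
        """Map filesystem labels (as udev escapes them) to the block devices they point at."""
        return {name: os.path.realpath(os.path.join(base, name))
                for name in sorted(os.listdir(base))}

    # From this particular log one would expect at least:
    #   {'OEM': '/dev/nvme0n1p6', 'ROOT': '/dev/nvme0n1p9', 'EFI-SYSTEM': ...}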
Jan 20 06:46:20.813390 coreos-metadata[1744]: Jan 20 06:46:20.813 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 06:46:20.815843 coreos-metadata[1744]: Jan 20 06:46:20.815 INFO Fetch successful Jan 20 06:46:20.817302 coreos-metadata[1744]: Jan 20 06:46:20.816 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 20 06:46:20.823414 coreos-metadata[1744]: Jan 20 06:46:20.823 INFO Fetch successful Jan 20 06:46:20.861732 coreos-metadata[1744]: Jan 20 06:46:20.861 INFO wrote hostname ci-4585.0.0-n-7cf3a16d5e to /sysroot/etc/hostname Jan 20 06:46:20.864321 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 06:46:20.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:22.057867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 06:46:22.065885 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 20 06:46:22.065913 kernel: audit: type=1130 audit(1768891582.057:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:22.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:22.066077 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 06:46:22.072190 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 06:46:22.094935 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 06:46:22.098553 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:46:22.113460 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 06:46:22.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:22.121226 kernel: audit: type=1130 audit(1768891582.116:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:22.124559 ignition[1849]: INFO : Ignition 2.24.0 Jan 20 06:46:22.124559 ignition[1849]: INFO : Stage: mount Jan 20 06:46:22.130566 ignition[1849]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:46:22.130566 ignition[1849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 06:46:22.130566 ignition[1849]: INFO : mount: mount passed Jan 20 06:46:22.130566 ignition[1849]: INFO : Ignition finished successfully Jan 20 06:46:22.142161 kernel: audit: type=1130 audit(1768891582.130:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:22.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:22.126803 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
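The flatcar-metadata-hostname step above first probes the Azure wireserver versions document and then fetches the instance name from IMDS before writing it to /sysroot/etc/hostname. A hedged re-creation of the second half with the standard library; the URL and api-version come from the log, and the Metadata: true header is the standard IMDS requirement.

    import urllib.request

    IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
                 "?api-version=2017-08-01&format=text")

    def fetch_instance_name():
        req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode().strip()

    def write_hostname(path="/sysroot/etc/hostname"):
        name = fetch_instance_name()
        with open(path, "w") as f:
            f.write(name + "\n")
        return name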
Jan 20 06:46:22.135412 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 06:46:22.148297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 06:46:22.165225 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1859) Jan 20 06:46:22.165255 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95d063cf-0d14-492f-8566-c80dea48b3c0 Jan 20 06:46:22.167241 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jan 20 06:46:22.172656 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 20 06:46:22.172690 kernel: BTRFS info (device nvme0n1p6): turning on async discard Jan 20 06:46:22.172702 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Jan 20 06:46:22.175077 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 06:46:22.193980 ignition[1875]: INFO : Ignition 2.24.0 Jan 20 06:46:22.193980 ignition[1875]: INFO : Stage: files Jan 20 06:46:22.199242 ignition[1875]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:46:22.199242 ignition[1875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 06:46:22.199242 ignition[1875]: DEBUG : files: compiled without relabeling support, skipping Jan 20 06:46:22.218140 ignition[1875]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 06:46:22.218140 ignition[1875]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 06:46:22.297042 ignition[1875]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 06:46:22.301318 ignition[1875]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 06:46:22.301318 ignition[1875]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 06:46:22.297436 unknown[1875]: wrote ssh authorized keys file for user: core Jan 20 06:46:22.316973 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:46:22.319201 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 06:46:22.357433 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 06:46:22.412998 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:46:22.415341 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 06:46:22.433999 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:46:22.433999 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 06:46:22.433999 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:46:22.433999 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:46:22.433999 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:46:22.433999 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 06:46:22.973157 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 06:46:24.139819 ignition[1875]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 06:46:24.139819 ignition[1875]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 06:46:24.186642 ignition[1875]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:46:24.195403 ignition[1875]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 06:46:24.195403 ignition[1875]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 06:46:24.195403 ignition[1875]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 06:46:24.195403 ignition[1875]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 06:46:24.195403 ignition[1875]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:46:24.195403 ignition[1875]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 06:46:24.195403 ignition[1875]: INFO : files: files passed Jan 20 06:46:24.195403 ignition[1875]: INFO : Ignition finished successfully Jan 20 06:46:24.223895 kernel: audit: type=1130 audit(1768891584.201:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.196597 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 06:46:24.204873 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
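The files stage above is driven entirely by the rendered Ignition config: it adds SSH keys for the core user, downloads the Helm tarball and the Kubernetes sysext image, writes the small files under /home/core and /etc/flatcar, links /etc/extensions/kubernetes.raw, and enables prepare-helm.service. As a hedged sketch (not the config that was actually fetched), an Ignition v3-style document producing roughly these operations might look like the following; the SSH key, unit body, and update.conf contents are placeholders.

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"}},
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,GROUP%3Dstable"}},  # placeholder contents
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"}
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n"
                             "[Install]\nWantedBy=multi-user.target\n"}
            ]
        },
    }

    print(json.dumps(config, indent=2))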
Jan 20 06:46:24.217331 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 06:46:24.238530 kernel: audit: type=1130 audit(1768891584.231:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.238555 kernel: audit: type=1131 audit(1768891584.231:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.228887 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 06:46:24.228958 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 06:46:24.251723 initrd-setup-root-after-ignition[1908]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:46:24.251723 initrd-setup-root-after-ignition[1908]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:46:24.258323 initrd-setup-root-after-ignition[1912]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 06:46:24.257510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:46:24.268797 kernel: audit: type=1130 audit(1768891584.262:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.263433 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 06:46:24.269019 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 06:46:24.298604 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 06:46:24.298806 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 06:46:24.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.304415 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 06:46:24.319264 kernel: audit: type=1130 audit(1768891584.303:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.319280 kernel: audit: type=1131 audit(1768891584.303:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:46:24.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.311101 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 06:46:24.311426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 06:46:24.313326 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 06:46:24.332244 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:46:24.335323 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 06:46:24.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.343236 kernel: audit: type=1130 audit(1768891584.331:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.354699 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 06:46:24.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.354879 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:46:24.357138 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:46:24.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.358882 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 06:46:24.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.359105 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 20 06:46:24.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.359235 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 06:46:24.359675 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 06:46:24.359938 systemd[1]: Stopped target basic.target - Basic System. Jan 20 06:46:24.360201 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 06:46:24.360469 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 06:46:24.360718 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 06:46:24.360888 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 06:46:24.361103 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 06:46:24.361377 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 06:46:24.361858 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 06:46:24.438310 ignition[1932]: INFO : Ignition 2.24.0 Jan 20 06:46:24.438310 ignition[1932]: INFO : Stage: umount Jan 20 06:46:24.438310 ignition[1932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 06:46:24.438310 ignition[1932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 06:46:24.438310 ignition[1932]: INFO : umount: umount passed Jan 20 06:46:24.438310 ignition[1932]: INFO : Ignition finished successfully Jan 20 06:46:24.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.362086 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 06:46:24.362354 systemd[1]: Stopped target swap.target - Swaps. Jan 20 06:46:24.362571 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 06:46:24.362666 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 06:46:24.363095 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:46:24.363421 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:46:24.363651 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 06:46:24.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.365434 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:46:24.383556 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 06:46:24.383661 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 06:46:24.387135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 20 06:46:24.387241 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 06:46:24.391380 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 06:46:24.391494 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 06:46:24.391913 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 06:46:24.391997 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 06:46:24.394391 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 06:46:24.397424 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 06:46:24.397475 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 06:46:24.397591 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:46:24.397859 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 06:46:24.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.397949 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:46:24.398314 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 06:46:24.399312 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 06:46:24.437497 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 06:46:24.437567 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 06:46:24.441073 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 06:46:24.441144 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 06:46:24.448618 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 06:46:24.448665 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 06:46:24.457991 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 06:46:24.458037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 06:46:24.476955 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 06:46:24.476993 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 06:46:24.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.495314 systemd[1]: Stopped target network.target - Network. Jan 20 06:46:24.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.498260 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 06:46:24.498305 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 06:46:24.501275 systemd[1]: Stopped target paths.target - Path Units. Jan 20 06:46:24.501448 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 06:46:24.507648 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:46:24.513828 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 06:46:24.515580 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 20 06:46:24.518498 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 06:46:24.518538 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 06:46:24.521372 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 06:46:24.521402 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 06:46:24.524251 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 06:46:24.524280 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:46:24.528273 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 06:46:24.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.528319 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 06:46:24.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.532283 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 06:46:24.532322 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 06:46:24.536364 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 06:46:24.540290 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 06:46:24.545123 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 06:46:24.549821 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 06:46:24.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.549917 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 06:46:24.553247 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 06:46:24.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.558000 audit: BPF prog-id=6 op=UNLOAD Jan 20 06:46:24.553328 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 06:46:24.559957 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 06:46:24.560000 audit: BPF prog-id=9 op=UNLOAD Jan 20 06:46:24.563795 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 06:46:24.563834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:46:24.567523 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 06:46:24.571086 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 06:46:24.571141 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 06:46:24.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.575414 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 06:46:24.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 06:46:24.575450 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:46:24.576875 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 06:46:24.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.576914 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 06:46:24.582529 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:46:24.596948 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 06:46:24.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.597077 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:46:24.598950 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 06:46:24.619068 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a68cc7c eth0: Data path switched from VF: enP30832s1 Jan 20 06:46:24.621343 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 20 06:46:24.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.599010 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 06:46:24.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.599222 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 06:46:24.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.599250 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 20 06:46:24.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.599410 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 06:46:24.599448 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 06:46:24.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.600016 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 06:46:24.600046 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 06:46:24.600302 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 06:46:24.600333 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 06:46:24.607365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 06:46:24.615280 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 06:46:24.615335 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:46:24.619074 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 06:46:24.619122 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 06:46:24.621286 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 20 06:46:24.621440 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:46:24.625307 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 06:46:24.625354 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:46:24.627982 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:46:24.628012 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:46:24.632886 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 06:46:24.632947 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 06:46:24.636453 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 06:46:24.636513 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 06:46:24.940124 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 06:46:24.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.941145 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 06:46:24.943748 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 06:46:24.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:24.947271 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
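Everything from the umount stage down to this point is the initrd tearing itself down so that PID 1 can switch root into /sysroot. In outline, switching root moves the prepared tree over / and re-executes into it; the fragment below is a deliberately simplified sketch of that mount --move plus chroot step (systemd does considerably more, including re-executing /usr/lib/systemd/systemd), with MS_MOVE taken from <sys/mount.h>.

    import ctypes
    import ctypes.util
    import os

    MS_MOVE = 0x2000  # <sys/mount.h>: atomically move a mount to a new mountpoint
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    def naive_switch_root(new_root="/sysroot"):
        """Grossly simplified: move new_root over / and chroot into it (initramfs context only)."""
        os.chdir(new_root)
        if libc.mount(new_root.encode(), b"/", None, MS_MOVE, None) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
        os.chroot(".")
        os.chdir("/")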
Jan 20 06:46:24.947321 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 06:46:24.951308 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 06:46:25.000584 systemd[1]: Switching root. Jan 20 06:46:25.078360 systemd-journald[1081]: Journal stopped Jan 20 06:46:29.755124 systemd-journald[1081]: Received SIGTERM from PID 1 (systemd). Jan 20 06:46:29.755142 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 06:46:29.755152 kernel: SELinux: policy capability open_perms=1 Jan 20 06:46:29.755159 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 06:46:29.755166 kernel: SELinux: policy capability always_check_network=0 Jan 20 06:46:29.755173 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 06:46:29.755180 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 06:46:29.755186 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 06:46:29.755194 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 06:46:29.755200 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 06:46:29.755216 systemd[1]: Successfully loaded SELinux policy in 160.366ms. Jan 20 06:46:29.755224 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.663ms. Jan 20 06:46:29.755231 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 06:46:29.755240 systemd[1]: Detected virtualization microsoft. Jan 20 06:46:29.755247 systemd[1]: Detected architecture x86-64. Jan 20 06:46:29.755254 systemd[1]: Detected first boot. Jan 20 06:46:29.755261 systemd[1]: Hostname set to . Jan 20 06:46:29.755269 systemd[1]: Initializing machine ID from random generator. Jan 20 06:46:29.755276 zram_generator::config[1976]: No configuration found. Jan 20 06:46:29.755284 kernel: Guest personality initialized and is inactive Jan 20 06:46:29.755290 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 20 06:46:29.755296 kernel: Initialized host personality Jan 20 06:46:29.755302 kernel: NET: Registered PF_VSOCK protocol family Jan 20 06:46:29.755308 systemd[1]: Populated /etc with preset unit settings. Jan 20 06:46:29.755315 kernel: kauditd_printk_skb: 45 callbacks suppressed Jan 20 06:46:29.755320 kernel: audit: type=1334 audit(1768891589.114:94): prog-id=12 op=LOAD Jan 20 06:46:29.755326 kernel: audit: type=1334 audit(1768891589.114:95): prog-id=3 op=UNLOAD Jan 20 06:46:29.755332 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 06:46:29.755337 kernel: audit: type=1334 audit(1768891589.114:96): prog-id=13 op=LOAD Jan 20 06:46:29.755342 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 06:46:29.755349 kernel: audit: type=1334 audit(1768891589.114:97): prog-id=14 op=LOAD Jan 20 06:46:29.755355 kernel: audit: type=1334 audit(1768891589.114:98): prog-id=4 op=UNLOAD Jan 20 06:46:29.755360 kernel: audit: type=1334 audit(1768891589.114:99): prog-id=5 op=UNLOAD Jan 20 06:46:29.755365 kernel: audit: type=1131 audit(1768891589.114:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:46:29.755371 kernel: audit: type=1334 audit(1768891589.124:101): prog-id=12 op=UNLOAD Jan 20 06:46:29.755377 kernel: audit: type=1130 audit(1768891589.131:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.755383 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 06:46:29.755389 kernel: audit: type=1131 audit(1768891589.131:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.755397 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 06:46:29.755403 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 06:46:29.755411 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 06:46:29.755417 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 06:46:29.755424 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 06:46:29.755430 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 06:46:29.755436 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 06:46:29.755442 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 06:46:29.755448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 06:46:29.755454 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 06:46:29.755461 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 06:46:29.755467 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 06:46:29.755473 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 06:46:29.755479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 06:46:29.755485 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 06:46:29.755491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 06:46:29.755498 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 06:46:29.755504 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 06:46:29.755510 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 06:46:29.755516 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 06:46:29.755522 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 06:46:29.755528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 06:46:29.755534 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 06:46:29.755540 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 06:46:29.755551 systemd[1]: Reached target slices.target - Slice Units. Jan 20 06:46:29.755557 systemd[1]: Reached target swap.target - Swaps. Jan 20 06:46:29.755563 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
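The journal above records the SELinux policy capabilities and the time taken to load the policy once systemd runs in the real root. A small sketch for inspecting the same state at runtime: /sys/fs/selinux/enforce is standard selinuxfs, while the per-capability files under policy_capabilities are assumed to be present on this kernel.

    import os

    SELINUXFS = "/sys/fs/selinux"

    def selinux_status():
        status = {"enforcing": None, "capabilities": {}}
        try:
            with open(os.path.join(SELINUXFS, "enforce")) as f:
                status["enforcing"] = f.read().strip() == "1"
        except FileNotFoundError:
            return status  # SELinux disabled or selinuxfs not mounted
        cap_dir = os.path.join(SELINUXFS, "policy_capabilities")
        if os.path.isdir(cap_dir):
            for name in sorted(os.listdir(cap_dir)):
                with open(os.path.join(cap_dir, name)) as f:
                    status["capabilities"][name] = f.read().strip() == "1"
        return status

    # e.g. capabilities like 'network_peer_controls' or 'open_perms' map to True/False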
Jan 20 06:46:29.755569 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 06:46:29.755576 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 06:46:29.755582 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 06:46:29.755588 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 06:46:29.755594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 06:46:29.755599 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 06:46:29.755606 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 06:46:29.755612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 06:46:29.755618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 06:46:29.755624 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 06:46:29.755630 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 06:46:29.755636 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 06:46:29.755642 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 06:46:29.755649 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:29.755655 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 06:46:29.755661 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 06:46:29.755667 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 06:46:29.755673 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 06:46:29.755679 systemd[1]: Reached target machines.target - Containers. Jan 20 06:46:29.755685 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 06:46:29.755692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:46:29.755698 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 06:46:29.755704 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 06:46:29.755710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:46:29.755716 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:46:29.755722 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:46:29.755729 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 06:46:29.755735 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:46:29.755741 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 06:46:29.755747 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 06:46:29.755753 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 06:46:29.755758 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 06:46:29.755764 systemd[1]: Stopped systemd-fsck-usr.service. 
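The modprobe@*.service instances being started just below each load a single kernel module. A trivial equivalent, assuming modprobe(8) is on PATH and the module names are the ones named in these unit descriptions:

    import subprocess

    def load_modules(names=("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop")):
        # what each modprobe@<name>.service effectively does: run `modprobe <name>`
        for name in names:
            subprocess.run(["modprobe", name], check=True)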
Jan 20 06:46:29.755772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:46:29.755778 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 06:46:29.755783 kernel: fuse: init (API version 7.41) Jan 20 06:46:29.755789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 06:46:29.755795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 06:46:29.755801 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 06:46:29.755807 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 06:46:29.755814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 06:46:29.755820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:29.755826 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 06:46:29.755832 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 06:46:29.755838 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 06:46:29.755844 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 06:46:29.755851 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 06:46:29.755857 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 06:46:29.755863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 06:46:29.755869 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 06:46:29.755875 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 06:46:29.755880 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:46:29.755886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:46:29.755896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:46:29.755902 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:46:29.755909 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 06:46:29.755915 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 06:46:29.755921 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:46:29.755927 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:46:29.755933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 06:46:29.755939 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 06:46:29.755946 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 06:46:29.755952 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 06:46:29.755959 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 06:46:29.755965 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 06:46:29.755971 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 20 06:46:29.755979 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 06:46:29.755985 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 06:46:29.755991 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 06:46:29.755997 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:46:29.756003 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:46:29.756010 kernel: ACPI: bus type drm_connector registered Jan 20 06:46:29.756023 systemd-journald[2059]: Collecting audit messages is enabled. Jan 20 06:46:29.756037 systemd-journald[2059]: Journal started Jan 20 06:46:29.756053 systemd-journald[2059]: Runtime Journal (/run/log/journal/d08a5da9d2224d5ab2ab17e5947afa4d) is 8M, max 158.5M, 150.5M free. Jan 20 06:46:29.272000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 06:46:29.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.512000 audit: BPF prog-id=14 op=UNLOAD Jan 20 06:46:29.512000 audit: BPF prog-id=13 op=UNLOAD Jan 20 06:46:29.513000 audit: BPF prog-id=15 op=LOAD Jan 20 06:46:29.513000 audit: BPF prog-id=16 op=LOAD Jan 20 06:46:29.513000 audit: BPF prog-id=17 op=LOAD Jan 20 06:46:29.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:46:29.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.752000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 06:46:29.752000 audit[2059]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff0d6dc340 a2=4000 a3=0 items=0 ppid=1 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:46:29.752000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 06:46:29.107569 systemd[1]: Queued start job for default target multi-user.target. Jan 20 06:46:29.115637 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 20 06:46:29.115941 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 06:46:29.774298 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 06:46:29.783230 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:46:29.794104 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 06:46:29.798226 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:46:29.808532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 20 06:46:29.814233 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 06:46:29.821254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 06:46:29.825258 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 06:46:29.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.830146 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 06:46:29.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.833888 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:46:29.834044 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:46:29.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.835633 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 06:46:29.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.838483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 06:46:29.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.841752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 06:46:29.843267 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 06:46:29.846440 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 06:46:29.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.855756 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 06:46:29.861337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 06:46:29.864313 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 06:46:29.872226 kernel: loop1: detected capacity change from 0 to 224512 Jan 20 06:46:29.894963 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 06:46:29.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 06:46:29.901039 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 06:46:29.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.904785 systemd-journald[2059]: Time spent on flushing to /var/log/journal/d08a5da9d2224d5ab2ab17e5947afa4d is 14.673ms for 1145 entries. Jan 20 06:46:29.904785 systemd-journald[2059]: System Journal (/var/log/journal/d08a5da9d2224d5ab2ab17e5947afa4d) is 8M, max 2.2G, 2.2G free. Jan 20 06:46:29.941058 systemd-journald[2059]: Received client request to flush runtime journal. Jan 20 06:46:29.941092 kernel: loop2: detected capacity change from 0 to 50784 Jan 20 06:46:29.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:29.915054 systemd-tmpfiles[2101]: ACLs are not supported, ignoring. Jan 20 06:46:29.915066 systemd-tmpfiles[2101]: ACLs are not supported, ignoring. Jan 20 06:46:29.917875 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 06:46:29.921375 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 06:46:29.941863 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 06:46:29.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.020794 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 06:46:30.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.021000 audit: BPF prog-id=18 op=LOAD Jan 20 06:46:30.021000 audit: BPF prog-id=19 op=LOAD Jan 20 06:46:30.021000 audit: BPF prog-id=20 op=LOAD Jan 20 06:46:30.023319 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 06:46:30.024000 audit: BPF prog-id=21 op=LOAD Jan 20 06:46:30.028320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 06:46:30.032175 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 06:46:30.036000 audit: BPF prog-id=22 op=LOAD Jan 20 06:46:30.036000 audit: BPF prog-id=23 op=LOAD Jan 20 06:46:30.036000 audit: BPF prog-id=24 op=LOAD Jan 20 06:46:30.038259 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 06:46:30.039000 audit: BPF prog-id=25 op=LOAD Jan 20 06:46:30.040000 audit: BPF prog-id=26 op=LOAD Jan 20 06:46:30.040000 audit: BPF prog-id=27 op=LOAD Jan 20 06:46:30.042354 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 06:46:30.052140 systemd-tmpfiles[2141]: ACLs are not supported, ignoring. Jan 20 06:46:30.052160 systemd-tmpfiles[2141]: ACLs are not supported, ignoring. Jan 20 06:46:30.054880 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 06:46:30.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.097703 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 06:46:30.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.103417 systemd-nsresourced[2142]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 06:46:30.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.107252 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 06:46:30.120094 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 06:46:30.168262 systemd-oomd[2139]: No swap; memory pressure usage will be degraded Jan 20 06:46:30.168889 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 06:46:30.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.213985 systemd-resolved[2140]: Positive Trust Anchors: Jan 20 06:46:30.214181 systemd-resolved[2140]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 06:46:30.214241 systemd-resolved[2140]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 06:46:30.214300 systemd-resolved[2140]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 06:46:30.362347 systemd-resolved[2140]: Using system hostname 'ci-4585.0.0-n-7cf3a16d5e'. Jan 20 06:46:30.363153 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 06:46:30.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.366347 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 06:46:30.386368 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 06:46:30.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:46:30.388000 audit: BPF prog-id=8 op=UNLOAD Jan 20 06:46:30.388000 audit: BPF prog-id=7 op=UNLOAD Jan 20 06:46:30.388000 audit: BPF prog-id=28 op=LOAD Jan 20 06:46:30.388000 audit: BPF prog-id=29 op=LOAD Jan 20 06:46:30.390276 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 06:46:30.414575 systemd-udevd[2162]: Using default interface naming scheme 'v257'. Jan 20 06:46:30.420259 kernel: loop3: detected capacity change from 0 to 111560 Jan 20 06:46:30.547628 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 06:46:30.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.550000 audit: BPF prog-id=30 op=LOAD Jan 20 06:46:30.553202 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 06:46:30.607783 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 06:46:30.628240 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 06:46:30.646412 kernel: hv_vmbus: registering driver hv_balloon Jan 20 06:46:30.650247 kernel: hv_vmbus: registering driver hyperv_fb Jan 20 06:46:30.654674 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 20 06:46:30.654785 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 20 06:46:30.656786 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 20 06:46:30.657221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#251 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 06:46:30.659743 kernel: Console: switching to colour dummy device 80x25 Jan 20 06:46:30.665127 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 06:46:30.724964 systemd-networkd[2169]: lo: Link UP Jan 20 06:46:30.724970 systemd-networkd[2169]: lo: Gained carrier Jan 20 06:46:30.726187 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 06:46:30.728556 systemd[1]: Reached target network.target - Network. Jan 20 06:46:30.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.731082 systemd-networkd[2169]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:46:30.731087 systemd-networkd[2169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:46:30.734224 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 20 06:46:30.731708 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 06:46:30.738392 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 20 06:46:30.745234 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 20 06:46:30.753715 kernel: hv_netvsc f8615163-0000-1000-2000-000d3a68cc7c eth0: Data path switched to VF: enP30832s1 Jan 20 06:46:30.751637 systemd-networkd[2169]: enP30832s1: Link UP Jan 20 06:46:30.751718 systemd-networkd[2169]: eth0: Link UP Jan 20 06:46:30.751721 systemd-networkd[2169]: eth0: Gained carrier Jan 20 06:46:30.751733 systemd-networkd[2169]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:46:30.752015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:46:30.760534 systemd-networkd[2169]: enP30832s1: Gained carrier Jan 20 06:46:30.773443 systemd-networkd[2169]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 20 06:46:30.782622 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:46:30.782815 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:46:30.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.787422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:46:30.810000 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 06:46:30.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.831509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 06:46:30.831683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:46:30.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:30.837063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 06:46:30.878224 kernel: loop4: detected capacity change from 0 to 27728 Jan 20 06:46:30.920063 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 20 06:46:30.923817 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 06:46:30.971225 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 20 06:46:30.988546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 20 06:46:30.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.146922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 06:46:31.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.327230 kernel: loop5: detected capacity change from 0 to 224512 Jan 20 06:46:31.337225 kernel: loop6: detected capacity change from 0 to 50784 Jan 20 06:46:31.346235 kernel: loop7: detected capacity change from 0 to 111560 Jan 20 06:46:31.357236 kernel: loop1: detected capacity change from 0 to 27728 Jan 20 06:46:31.363065 (sd-merge)[2255]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Jan 20 06:46:31.365456 (sd-merge)[2255]: Merged extensions into '/usr'. Jan 20 06:46:31.368307 systemd[1]: Reload requested from client PID 2099 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 06:46:31.368318 systemd[1]: Reloading... Jan 20 06:46:31.411235 zram_generator::config[2288]: No configuration found. Jan 20 06:46:31.594228 systemd[1]: Reloading finished in 225 ms. Jan 20 06:46:31.623720 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 06:46:31.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.640886 systemd[1]: Starting ensure-sysext.service... Jan 20 06:46:31.645334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 20 06:46:31.653000 audit: BPF prog-id=31 op=LOAD Jan 20 06:46:31.653000 audit: BPF prog-id=18 op=UNLOAD Jan 20 06:46:31.653000 audit: BPF prog-id=32 op=LOAD Jan 20 06:46:31.653000 audit: BPF prog-id=33 op=LOAD Jan 20 06:46:31.653000 audit: BPF prog-id=19 op=UNLOAD Jan 20 06:46:31.653000 audit: BPF prog-id=20 op=UNLOAD Jan 20 06:46:31.654000 audit: BPF prog-id=34 op=LOAD Jan 20 06:46:31.654000 audit: BPF prog-id=21 op=UNLOAD Jan 20 06:46:31.655000 audit: BPF prog-id=35 op=LOAD Jan 20 06:46:31.655000 audit: BPF prog-id=22 op=UNLOAD Jan 20 06:46:31.655000 audit: BPF prog-id=36 op=LOAD Jan 20 06:46:31.655000 audit: BPF prog-id=37 op=LOAD Jan 20 06:46:31.655000 audit: BPF prog-id=23 op=UNLOAD Jan 20 06:46:31.655000 audit: BPF prog-id=24 op=UNLOAD Jan 20 06:46:31.656000 audit: BPF prog-id=38 op=LOAD Jan 20 06:46:31.656000 audit: BPF prog-id=15 op=UNLOAD Jan 20 06:46:31.656000 audit: BPF prog-id=39 op=LOAD Jan 20 06:46:31.656000 audit: BPF prog-id=40 op=LOAD Jan 20 06:46:31.656000 audit: BPF prog-id=16 op=UNLOAD Jan 20 06:46:31.656000 audit: BPF prog-id=17 op=UNLOAD Jan 20 06:46:31.656000 audit: BPF prog-id=41 op=LOAD Jan 20 06:46:31.656000 audit: BPF prog-id=42 op=LOAD Jan 20 06:46:31.656000 audit: BPF prog-id=28 op=UNLOAD Jan 20 06:46:31.656000 audit: BPF prog-id=29 op=UNLOAD Jan 20 06:46:31.657000 audit: BPF prog-id=43 op=LOAD Jan 20 06:46:31.657000 audit: BPF prog-id=30 op=UNLOAD Jan 20 06:46:31.657000 audit: BPF prog-id=44 op=LOAD Jan 20 06:46:31.657000 audit: BPF prog-id=25 op=UNLOAD Jan 20 06:46:31.657000 audit: BPF prog-id=45 op=LOAD Jan 20 06:46:31.657000 audit: BPF prog-id=46 op=LOAD Jan 20 06:46:31.657000 audit: BPF prog-id=26 op=UNLOAD Jan 20 06:46:31.657000 audit: BPF prog-id=27 op=UNLOAD Jan 20 06:46:31.663247 systemd-tmpfiles[2347]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 06:46:31.663272 systemd-tmpfiles[2347]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 06:46:31.663700 systemd-tmpfiles[2347]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 06:46:31.664311 systemd[1]: Reload requested from client PID 2346 ('systemctl') (unit ensure-sysext.service)... Jan 20 06:46:31.664371 systemd[1]: Reloading... Jan 20 06:46:31.664599 systemd-tmpfiles[2347]: ACLs are not supported, ignoring. Jan 20 06:46:31.664647 systemd-tmpfiles[2347]: ACLs are not supported, ignoring. Jan 20 06:46:31.682760 systemd-tmpfiles[2347]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:46:31.682771 systemd-tmpfiles[2347]: Skipping /boot Jan 20 06:46:31.686762 systemd-tmpfiles[2347]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 06:46:31.686767 systemd-tmpfiles[2347]: Skipping /boot Jan 20 06:46:31.720244 zram_generator::config[2381]: No configuration found. Jan 20 06:46:31.878624 systemd[1]: Reloading finished in 213 ms. 
Jan 20 06:46:31.889000 audit: BPF prog-id=47 op=LOAD Jan 20 06:46:31.889000 audit: BPF prog-id=31 op=UNLOAD Jan 20 06:46:31.889000 audit: BPF prog-id=48 op=LOAD Jan 20 06:46:31.889000 audit: BPF prog-id=49 op=LOAD Jan 20 06:46:31.889000 audit: BPF prog-id=32 op=UNLOAD Jan 20 06:46:31.889000 audit: BPF prog-id=33 op=UNLOAD Jan 20 06:46:31.890000 audit: BPF prog-id=50 op=LOAD Jan 20 06:46:31.890000 audit: BPF prog-id=35 op=UNLOAD Jan 20 06:46:31.890000 audit: BPF prog-id=51 op=LOAD Jan 20 06:46:31.890000 audit: BPF prog-id=52 op=LOAD Jan 20 06:46:31.890000 audit: BPF prog-id=36 op=UNLOAD Jan 20 06:46:31.890000 audit: BPF prog-id=37 op=UNLOAD Jan 20 06:46:31.890000 audit: BPF prog-id=53 op=LOAD Jan 20 06:46:31.890000 audit: BPF prog-id=44 op=UNLOAD Jan 20 06:46:31.890000 audit: BPF prog-id=54 op=LOAD Jan 20 06:46:31.890000 audit: BPF prog-id=55 op=LOAD Jan 20 06:46:31.890000 audit: BPF prog-id=45 op=UNLOAD Jan 20 06:46:31.890000 audit: BPF prog-id=46 op=UNLOAD Jan 20 06:46:31.891000 audit: BPF prog-id=56 op=LOAD Jan 20 06:46:31.891000 audit: BPF prog-id=57 op=LOAD Jan 20 06:46:31.891000 audit: BPF prog-id=41 op=UNLOAD Jan 20 06:46:31.891000 audit: BPF prog-id=42 op=UNLOAD Jan 20 06:46:31.891000 audit: BPF prog-id=58 op=LOAD Jan 20 06:46:31.891000 audit: BPF prog-id=34 op=UNLOAD Jan 20 06:46:31.892000 audit: BPF prog-id=59 op=LOAD Jan 20 06:46:31.896000 audit: BPF prog-id=43 op=UNLOAD Jan 20 06:46:31.896000 audit: BPF prog-id=60 op=LOAD Jan 20 06:46:31.896000 audit: BPF prog-id=38 op=UNLOAD Jan 20 06:46:31.896000 audit: BPF prog-id=61 op=LOAD Jan 20 06:46:31.896000 audit: BPF prog-id=62 op=LOAD Jan 20 06:46:31.896000 audit: BPF prog-id=39 op=UNLOAD Jan 20 06:46:31.896000 audit: BPF prog-id=40 op=UNLOAD Jan 20 06:46:31.900103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 06:46:31.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.908955 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:31.909820 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:46:31.913441 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 06:46:31.915072 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:46:31.918303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:46:31.923399 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:46:31.926334 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:46:31.928592 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:46:31.928811 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:46:31.930391 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 06:46:31.932424 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 20 06:46:31.935897 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 06:46:31.939188 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 06:46:31.947105 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:31.949142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:46:31.951670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:46:31.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.954402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:46:31.954557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:46:31.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.957095 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:46:31.957284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:46:31.956000 audit[2449]: SYSTEM_BOOT pid=2449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.965845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:31.966012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:46:31.967033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:46:31.970295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 06:46:31.976275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:46:31.978230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 20 06:46:31.978444 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:46:31.978567 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:46:31.978683 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:31.981005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:46:31.981238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:46:31.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.983297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:46:31.983430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:46:31.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.985255 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:46:31.987380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:46:31.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.992735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 06:46:31.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:31.997355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:31.997567 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 06:46:32.000293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 06:46:32.002917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 06:46:32.006293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 20 06:46:32.012451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 06:46:32.015332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 06:46:32.015468 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 06:46:32.015548 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 06:46:32.015656 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 06:46:32.017847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 06:46:32.019013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 06:46:32.019374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 06:46:32.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.021769 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 06:46:32.021910 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 06:46:32.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.023913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 06:46:32.024058 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 06:46:32.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.027751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 06:46:32.027902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 06:46:32.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 20 06:46:32.030906 systemd[1]: Finished ensure-sysext.service. Jan 20 06:46:32.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.034992 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 06:46:32.035085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 06:46:32.156557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 06:46:32.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:46:32.316000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 06:46:32.316000 audit[2488]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd059deb50 a2=420 a3=0 items=0 ppid=2441 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:46:32.318277 augenrules[2488]: No rules Jan 20 06:46:32.316000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:46:32.318923 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:46:32.319077 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 06:46:32.735435 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 06:46:32.737047 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 06:46:32.777310 systemd-networkd[2169]: eth0: Gained IPv6LL Jan 20 06:46:32.778819 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 06:46:32.781492 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 06:46:38.036317 ldconfig[2446]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 06:46:38.047006 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 06:46:38.049666 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 06:46:38.065821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 06:46:38.068627 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 06:46:38.072357 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 06:46:38.075290 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 06:46:38.076704 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 06:46:38.079357 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Jan 20 06:46:38.082325 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 06:46:38.083972 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 06:46:38.087323 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 06:46:38.088848 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 06:46:38.090582 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 06:46:38.090613 systemd[1]: Reached target paths.target - Path Units. Jan 20 06:46:38.091826 systemd[1]: Reached target timers.target - Timer Units. Jan 20 06:46:38.093971 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 06:46:38.098046 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 06:46:38.100787 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 06:46:38.104368 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 06:46:38.107294 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 06:46:38.117616 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 06:46:38.121447 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 06:46:38.123501 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 06:46:38.125554 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 06:46:38.127097 systemd[1]: Reached target basic.target - Basic System. Jan 20 06:46:38.128473 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:46:38.128495 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 06:46:38.157931 systemd[1]: Starting chronyd.service - NTP client/server... Jan 20 06:46:38.159876 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 06:46:38.165332 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 06:46:38.173294 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 06:46:38.176698 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 06:46:38.180198 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 06:46:38.187305 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 06:46:38.189612 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 06:46:38.194277 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 06:46:38.196538 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 20 06:46:38.198450 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 20 06:46:38.200299 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). 
Jan 20 06:46:38.203303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:46:38.204085 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 06:46:38.207406 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 06:46:38.215943 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 06:46:38.219346 jq[2509]: false Jan 20 06:46:38.224448 KVP[2512]: KVP starting; pid is:2512 Jan 20 06:46:38.228739 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 06:46:38.233173 KVP[2512]: KVP LIC Version: 3.1 Jan 20 06:46:38.233272 kernel: hv_utils: KVP IC version 4.0 Jan 20 06:46:38.235361 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 06:46:38.241517 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 06:46:38.244316 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 06:46:38.244975 google_oslogin_nss_cache[2511]: oslogin_cache_refresh[2511]: Refreshing passwd entry cache Jan 20 06:46:38.245156 oslogin_cache_refresh[2511]: Refreshing passwd entry cache Jan 20 06:46:38.245733 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 06:46:38.252816 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 06:46:38.257450 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 06:46:38.261070 extend-filesystems[2510]: Found /dev/nvme0n1p6 Jan 20 06:46:38.267672 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 06:46:38.271580 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 06:46:38.271765 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 06:46:38.273467 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 06:46:38.273650 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 06:46:38.276920 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 06:46:38.277102 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 06:46:38.280222 jq[2532]: true Jan 20 06:46:38.284572 google_oslogin_nss_cache[2511]: oslogin_cache_refresh[2511]: Failure getting users, quitting Jan 20 06:46:38.284572 google_oslogin_nss_cache[2511]: oslogin_cache_refresh[2511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:46:38.284572 google_oslogin_nss_cache[2511]: oslogin_cache_refresh[2511]: Refreshing group entry cache Jan 20 06:46:38.282900 oslogin_cache_refresh[2511]: Failure getting users, quitting Jan 20 06:46:38.282914 oslogin_cache_refresh[2511]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 06:46:38.282947 oslogin_cache_refresh[2511]: Refreshing group entry cache Jan 20 06:46:38.293328 extend-filesystems[2510]: Found /dev/nvme0n1p9 Jan 20 06:46:38.301750 chronyd[2501]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 20 06:46:38.302083 extend-filesystems[2510]: Checking size of /dev/nvme0n1p9 Jan 20 06:46:38.309072 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 20 06:46:38.319425 jq[2544]: true Jan 20 06:46:38.324785 chronyd[2501]: Timezone right/UTC failed leap second check, ignoring Jan 20 06:46:38.327853 google_oslogin_nss_cache[2511]: oslogin_cache_refresh[2511]: Failure getting groups, quitting Jan 20 06:46:38.327853 google_oslogin_nss_cache[2511]: oslogin_cache_refresh[2511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:46:38.325046 systemd[1]: Started chronyd.service - NTP client/server. Jan 20 06:46:38.324918 chronyd[2501]: Loaded seccomp filter (level 2) Jan 20 06:46:38.325341 oslogin_cache_refresh[2511]: Failure getting groups, quitting Jan 20 06:46:38.325350 oslogin_cache_refresh[2511]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 06:46:38.330480 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 06:46:38.333651 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 06:46:38.339365 update_engine[2526]: I20260120 06:46:38.337891 2526 main.cc:92] Flatcar Update Engine starting Jan 20 06:46:38.355316 extend-filesystems[2510]: Resized partition /dev/nvme0n1p9 Jan 20 06:46:38.364703 tar[2542]: linux-amd64/LICENSE Jan 20 06:46:38.364703 tar[2542]: linux-amd64/helm Jan 20 06:46:38.378505 extend-filesystems[2581]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 06:46:38.385736 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 6359552 to 6376955 blocks Jan 20 06:46:38.389233 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 6376955 Jan 20 06:46:38.402055 systemd-logind[2525]: New seat seat0. Jan 20 06:46:38.451450 systemd-logind[2525]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 20 06:46:38.451800 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 06:46:38.477290 extend-filesystems[2581]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 20 06:46:38.477290 extend-filesystems[2581]: old_desc_blocks = 4, new_desc_blocks = 4 Jan 20 06:46:38.477290 extend-filesystems[2581]: The filesystem on /dev/nvme0n1p9 is now 6376955 (4k) blocks long. Jan 20 06:46:38.472965 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 06:46:38.488530 extend-filesystems[2510]: Resized filesystem in /dev/nvme0n1p9 Jan 20 06:46:38.473191 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 06:46:38.495121 bash[2585]: Updated "/home/core/.ssh/authorized_keys" Jan 20 06:46:38.495448 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 06:46:38.498593 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 06:46:38.532832 dbus-daemon[2504]: [system] SELinux support is enabled Jan 20 06:46:38.533546 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 06:46:38.539836 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 06:46:38.539928 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 06:46:38.542847 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 06:46:38.543247 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 20 06:46:38.556492 dbus-daemon[2504]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 06:46:38.567736 update_engine[2526]: I20260120 06:46:38.560633 2526 update_check_scheduler.cc:74] Next update check in 7m25s Jan 20 06:46:38.561516 systemd[1]: Started update-engine.service - Update Engine. Jan 20 06:46:38.565435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 06:46:38.684276 coreos-metadata[2503]: Jan 20 06:46:38.683 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 06:46:38.687953 coreos-metadata[2503]: Jan 20 06:46:38.687 INFO Fetch successful Jan 20 06:46:38.688020 coreos-metadata[2503]: Jan 20 06:46:38.687 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 20 06:46:38.695774 coreos-metadata[2503]: Jan 20 06:46:38.695 INFO Fetch successful Jan 20 06:46:38.695774 coreos-metadata[2503]: Jan 20 06:46:38.695 INFO Fetching http://168.63.129.16/machine/1e29f0af-b15d-4752-a57b-82cacdc9f6dd/fdc92fa5%2D8756%2D46e6%2D80d1%2Dac8e19c9bb88.%5Fci%2D4585.0.0%2Dn%2D7cf3a16d5e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 20 06:46:38.696953 coreos-metadata[2503]: Jan 20 06:46:38.696 INFO Fetch successful Jan 20 06:46:38.696953 coreos-metadata[2503]: Jan 20 06:46:38.696 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 20 06:46:38.707783 coreos-metadata[2503]: Jan 20 06:46:38.706 INFO Fetch successful Jan 20 06:46:38.756835 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 06:46:38.760644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 06:46:38.864413 sshd_keygen[2545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 06:46:38.888525 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 06:46:38.894375 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 06:46:38.901369 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 20 06:46:38.921056 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 06:46:38.921233 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 06:46:38.926430 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 06:46:38.952354 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 20 06:46:38.957542 locksmithd[2607]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 06:46:38.959900 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 06:46:38.964943 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 06:46:38.968658 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 06:46:38.971022 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 06:46:39.049883 tar[2542]: linux-amd64/README.md Jan 20 06:46:39.066234 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 06:46:39.470934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 06:46:39.485524 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:46:39.607700 containerd[2553]: time="2026-01-20T06:46:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 06:46:39.608325 containerd[2553]: time="2026-01-20T06:46:39.608294953Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 06:46:39.618226 containerd[2553]: time="2026-01-20T06:46:39.618048705Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.558µs" Jan 20 06:46:39.618226 containerd[2553]: time="2026-01-20T06:46:39.618075489Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 06:46:39.618226 containerd[2553]: time="2026-01-20T06:46:39.618106752Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 06:46:39.618226 containerd[2553]: time="2026-01-20T06:46:39.618124560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 06:46:39.618338 containerd[2553]: time="2026-01-20T06:46:39.618329088Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 06:46:39.618363 containerd[2553]: time="2026-01-20T06:46:39.618356502Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618431 containerd[2553]: time="2026-01-20T06:46:39.618422106Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618465 containerd[2553]: time="2026-01-20T06:46:39.618458208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618631 containerd[2553]: time="2026-01-20T06:46:39.618619096Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618668 containerd[2553]: time="2026-01-20T06:46:39.618660719Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618700 containerd[2553]: time="2026-01-20T06:46:39.618692649Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618726 containerd[2553]: time="2026-01-20T06:46:39.618720385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618852 containerd[2553]: time="2026-01-20T06:46:39.618844043Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 06:46:39.618882 containerd[2553]: time="2026-01-20T06:46:39.618876321Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 
06:46:39.618957 containerd[2553]: time="2026-01-20T06:46:39.618949353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 06:46:39.619096 containerd[2553]: time="2026-01-20T06:46:39.619088299Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:46:39.619142 containerd[2553]: time="2026-01-20T06:46:39.619132263Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 06:46:39.619172 containerd[2553]: time="2026-01-20T06:46:39.619166062Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 06:46:39.619514 containerd[2553]: time="2026-01-20T06:46:39.619236189Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 06:46:39.619514 containerd[2553]: time="2026-01-20T06:46:39.619409654Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 06:46:39.619514 containerd[2553]: time="2026-01-20T06:46:39.619447923Z" level=info msg="metadata content store policy set" policy=shared Jan 20 06:46:39.635133 containerd[2553]: time="2026-01-20T06:46:39.635102990Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 06:46:39.635189 containerd[2553]: time="2026-01-20T06:46:39.635146931Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:46:39.636027 containerd[2553]: time="2026-01-20T06:46:39.636005366Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 06:46:39.636027 containerd[2553]: time="2026-01-20T06:46:39.636021402Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 06:46:39.636071 containerd[2553]: time="2026-01-20T06:46:39.636034172Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 06:46:39.636071 containerd[2553]: time="2026-01-20T06:46:39.636046410Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 06:46:39.636071 containerd[2553]: time="2026-01-20T06:46:39.636060193Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 06:46:39.636071 containerd[2553]: time="2026-01-20T06:46:39.636069645Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 06:46:39.636147 containerd[2553]: time="2026-01-20T06:46:39.636079907Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 06:46:39.636147 containerd[2553]: time="2026-01-20T06:46:39.636089750Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 06:46:39.636147 containerd[2553]: time="2026-01-20T06:46:39.636100140Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 06:46:39.636147 containerd[2553]: time="2026-01-20T06:46:39.636109164Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 06:46:39.636147 containerd[2553]: time="2026-01-20T06:46:39.636119472Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 06:46:39.636147 containerd[2553]: time="2026-01-20T06:46:39.636133444Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 06:46:39.636252 containerd[2553]: time="2026-01-20T06:46:39.636234900Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 06:46:39.636272 containerd[2553]: time="2026-01-20T06:46:39.636250138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 06:46:39.636288 containerd[2553]: time="2026-01-20T06:46:39.636268388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 06:46:39.636288 containerd[2553]: time="2026-01-20T06:46:39.636278900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 06:46:39.636332 containerd[2553]: time="2026-01-20T06:46:39.636287703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 06:46:39.636332 containerd[2553]: time="2026-01-20T06:46:39.636296502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 06:46:39.636332 containerd[2553]: time="2026-01-20T06:46:39.636305254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 06:46:39.636332 containerd[2553]: time="2026-01-20T06:46:39.636313969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 06:46:39.636332 containerd[2553]: time="2026-01-20T06:46:39.636323494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 06:46:39.636406 containerd[2553]: time="2026-01-20T06:46:39.636332410Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 06:46:39.636406 containerd[2553]: time="2026-01-20T06:46:39.636341853Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 06:46:39.636406 containerd[2553]: time="2026-01-20T06:46:39.636367784Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 06:46:39.636452 containerd[2553]: time="2026-01-20T06:46:39.636412310Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 06:46:39.636452 containerd[2553]: time="2026-01-20T06:46:39.636422951Z" level=info msg="Start snapshots syncer" Jan 20 06:46:39.636452 containerd[2553]: time="2026-01-20T06:46:39.636448350Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 06:46:39.637404 containerd[2553]: time="2026-01-20T06:46:39.636745386Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 06:46:39.637404 containerd[2553]: time="2026-01-20T06:46:39.636788264Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.636831423Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.636932324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.636951932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.636968521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.636977478Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.636986803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.636995738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.637004485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.637012728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.637020568Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.637086982Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.637108503Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 06:46:39.637556 containerd[2553]: time="2026-01-20T06:46:39.637116268Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637124771Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637131836Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637143943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637153330Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637166041Z" level=info msg="runtime interface created" Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637170338Z" level=info msg="created NRI interface" Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637177683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637186161Z" level=info msg="Connect containerd service" Jan 20 06:46:39.637766 containerd[2553]: time="2026-01-20T06:46:39.637203102Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 06:46:39.637913 containerd[2553]: time="2026-01-20T06:46:39.637823571Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 06:46:39.932810 kubelet[2656]: E0120 06:46:39.932756 2656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:46:39.934096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:46:39.934243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:46:39.934608 systemd[1]: kubelet.service: Consumed 788ms CPU time, 264.6M memory peak. 
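[annotation] The kubelet exit above (and the identical failures repeated later in this log) is the expected pre-bootstrap state: the unit starts before kubeadm has written /var/lib/kubelet/config.yaml, so it exits with status 1 and systemd keeps rescheduling it. A small illustrative check, using the path taken from the error message, that separates this benign restart loop from a real kubelet fault:

import pathlib

# Path copied from the kubelet error above; until kubeadm init/join creates it,
# repeated kubelet.service restarts with status=1/FAILURE are expected noise.
KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    print("kubelet not bootstrapped yet (config.yaml missing) - "
          "restart loop is expected until kubeadm runs")
else:
    print("config.yaml present - inspect kubelet logs for a real failure")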
Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072133883Z" level=info msg="Start subscribing containerd event" Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072168621Z" level=info msg="Start recovering state" Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072263980Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072297271Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072354308Z" level=info msg="Start event monitor" Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072377154Z" level=info msg="Start cni network conf syncer for default" Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072384923Z" level=info msg="Start streaming server" Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072393405Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072399690Z" level=info msg="runtime interface starting up..." Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072404873Z" level=info msg="starting plugins..." Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072414374Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 06:46:40.072857 containerd[2553]: time="2026-01-20T06:46:40.072507343Z" level=info msg="containerd successfully booted in 0.466637s" Jan 20 06:46:40.072644 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 06:46:40.074539 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 06:46:40.078608 systemd[1]: Startup finished in 4.072s (kernel) + 11.218s (initrd) + 14.150s (userspace) = 29.442s. Jan 20 06:46:40.351855 login[2646]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:46:40.356790 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 06:46:40.357508 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 06:46:40.363506 systemd-logind[2525]: New session 1 of user core. Jan 20 06:46:40.386499 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 06:46:40.388605 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 06:46:40.402662 (systemd)[2684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:46:40.404590 systemd-logind[2525]: New session 2 of user core. Jan 20 06:46:40.429855 login[2647]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:46:40.444650 systemd-logind[2525]: New session 3 of user core. 
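[annotation] systemd prints the boot-time breakdown above ("Startup finished in 4.072s (kernel) + 11.218s (initrd) + 14.150s (userspace) = 29.442s"). A throwaway sketch of pulling those phases out of such a journal line; the line format is assumed from the message shown here, and the stated total is taken as reported rather than recomputed:

import re

LINE = ("Startup finished in 4.072s (kernel) + 11.218s (initrd) "
        "+ 14.150s (userspace) = 29.442s.")

# Capture each "<seconds>s (<phase>)" pair plus the total systemd reports.
phases = dict((name, float(sec))
              for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", LINE))
total = float(re.search(r"= ([\d.]+)s", LINE).group(1))
print(phases, "total:", total)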
Jan 20 06:46:40.535402 waagent[2644]: 2026-01-20T06:46:40.535355Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 20 06:46:40.535792 waagent[2644]: 2026-01-20T06:46:40.535764Z INFO Daemon Daemon OS: flatcar 4585.0.0 Jan 20 06:46:40.535967 waagent[2644]: 2026-01-20T06:46:40.535946Z INFO Daemon Daemon Python: 3.11.13 Jan 20 06:46:40.536416 waagent[2644]: 2026-01-20T06:46:40.536344Z INFO Daemon Daemon Run daemon Jan 20 06:46:40.536764 waagent[2644]: 2026-01-20T06:46:40.536741Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4585.0.0' Jan 20 06:46:40.536970 waagent[2644]: 2026-01-20T06:46:40.536954Z INFO Daemon Daemon Using waagent for provisioning Jan 20 06:46:40.537678 waagent[2644]: 2026-01-20T06:46:40.537659Z INFO Daemon Daemon Activate resource disk Jan 20 06:46:40.537854 waagent[2644]: 2026-01-20T06:46:40.537837Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 20 06:46:40.539460 waagent[2644]: 2026-01-20T06:46:40.539432Z INFO Daemon Daemon Found device: None Jan 20 06:46:40.539758 waagent[2644]: 2026-01-20T06:46:40.539740Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 20 06:46:40.540051 waagent[2644]: 2026-01-20T06:46:40.540034Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 20 06:46:40.540953 waagent[2644]: 2026-01-20T06:46:40.540921Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 06:46:40.541142 waagent[2644]: 2026-01-20T06:46:40.541124Z INFO Daemon Daemon Running default provisioning handler Jan 20 06:46:40.547451 waagent[2644]: 2026-01-20T06:46:40.547415Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 20 06:46:40.548008 waagent[2644]: 2026-01-20T06:46:40.547982Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 20 06:46:40.548205 waagent[2644]: 2026-01-20T06:46:40.548188Z INFO Daemon Daemon cloud-init is enabled: False Jan 20 06:46:40.548474 waagent[2644]: 2026-01-20T06:46:40.548457Z INFO Daemon Daemon Copying ovf-env.xml Jan 20 06:46:40.597849 systemd[2684]: Queued start job for default target default.target. Jan 20 06:46:40.607967 systemd[2684]: Created slice app.slice - User Application Slice. Jan 20 06:46:40.607996 systemd[2684]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 06:46:40.608011 systemd[2684]: Reached target paths.target - Paths. Jan 20 06:46:40.608181 systemd[2684]: Reached target timers.target - Timers. Jan 20 06:46:40.608919 systemd[2684]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 06:46:40.609496 systemd[2684]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 06:46:40.618617 systemd[2684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 06:46:40.618747 systemd[2684]: Reached target sockets.target - Sockets. Jan 20 06:46:40.619260 systemd[2684]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 06:46:40.619368 systemd[2684]: Reached target basic.target - Basic System. Jan 20 06:46:40.619464 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 20 06:46:40.620047 systemd[2684]: Reached target default.target - Main User Target. Jan 20 06:46:40.620073 systemd[2684]: Startup finished in 212ms. Jan 20 06:46:40.622389 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 06:46:40.622975 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 06:46:40.653464 waagent[2644]: 2026-01-20T06:46:40.653127Z INFO Daemon Daemon Successfully mounted dvd Jan 20 06:46:40.675507 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 20 06:46:40.678247 waagent[2644]: 2026-01-20T06:46:40.677370Z INFO Daemon Daemon Detect protocol endpoint Jan 20 06:46:40.678409 waagent[2644]: 2026-01-20T06:46:40.678346Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 06:46:40.679550 waagent[2644]: 2026-01-20T06:46:40.679526Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 20 06:46:40.680829 waagent[2644]: 2026-01-20T06:46:40.680775Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 20 06:46:40.681953 waagent[2644]: 2026-01-20T06:46:40.681929Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 20 06:46:40.682936 waagent[2644]: 2026-01-20T06:46:40.682915Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 20 06:46:40.694227 waagent[2644]: 2026-01-20T06:46:40.692455Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 20 06:46:40.694227 waagent[2644]: 2026-01-20T06:46:40.692699Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 20 06:46:40.694227 waagent[2644]: 2026-01-20T06:46:40.692856Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 20 06:46:40.793845 waagent[2644]: 2026-01-20T06:46:40.793801Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 20 06:46:40.795131 waagent[2644]: 2026-01-20T06:46:40.795103Z INFO Daemon Daemon Forcing an update of the goal state. Jan 20 06:46:40.798422 waagent[2644]: 2026-01-20T06:46:40.798394Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 06:46:40.816654 waagent[2644]: 2026-01-20T06:46:40.816630Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 20 06:46:40.818178 waagent[2644]: 2026-01-20T06:46:40.818142Z INFO Daemon Jan 20 06:46:40.818868 waagent[2644]: 2026-01-20T06:46:40.818838Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 2b855309-10c2-4609-b869-63f2c0dc6f8c eTag: 14515690915906941826 source: Fabric] Jan 20 06:46:40.821135 waagent[2644]: 2026-01-20T06:46:40.821105Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 20 06:46:40.822561 waagent[2644]: 2026-01-20T06:46:40.822537Z INFO Daemon Jan 20 06:46:40.823163 waagent[2644]: 2026-01-20T06:46:40.823140Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 20 06:46:40.826798 waagent[2644]: 2026-01-20T06:46:40.826776Z INFO Daemon Daemon Downloading artifacts profile blob Jan 20 06:46:40.952006 waagent[2644]: 2026-01-20T06:46:40.951941Z INFO Daemon Downloaded certificate {'thumbprint': '705045B09069FEF6D24BBB256B807346DC444528', 'hasPrivateKey': True} Jan 20 06:46:40.954025 waagent[2644]: 2026-01-20T06:46:40.953992Z INFO Daemon Fetch goal state completed Jan 20 06:46:40.991563 waagent[2644]: 2026-01-20T06:46:40.991509Z INFO Daemon Daemon Starting provisioning Jan 20 06:46:40.991811 waagent[2644]: 2026-01-20T06:46:40.991783Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 20 06:46:40.993179 waagent[2644]: 2026-01-20T06:46:40.992979Z INFO Daemon Daemon Set hostname [ci-4585.0.0-n-7cf3a16d5e] Jan 20 06:46:41.010531 waagent[2644]: 2026-01-20T06:46:41.010492Z INFO Daemon Daemon Publish hostname [ci-4585.0.0-n-7cf3a16d5e] Jan 20 06:46:41.011856 waagent[2644]: 2026-01-20T06:46:41.011821Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 20 06:46:41.013230 waagent[2644]: 2026-01-20T06:46:41.013195Z INFO Daemon Daemon Primary interface is [eth0] Jan 20 06:46:41.019110 systemd-networkd[2169]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 06:46:41.019116 systemd-networkd[2169]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Jan 20 06:46:41.019168 systemd-networkd[2169]: eth0: DHCP lease lost Jan 20 06:46:41.032301 waagent[2644]: 2026-01-20T06:46:41.032262Z INFO Daemon Daemon Create user account if not exists Jan 20 06:46:41.033200 waagent[2644]: 2026-01-20T06:46:41.032467Z INFO Daemon Daemon User core already exists, skip useradd Jan 20 06:46:41.033200 waagent[2644]: 2026-01-20T06:46:41.032637Z INFO Daemon Daemon Configure sudoer Jan 20 06:46:41.036933 waagent[2644]: 2026-01-20T06:46:41.036891Z INFO Daemon Daemon Configure sshd Jan 20 06:46:41.038259 systemd-networkd[2169]: eth0: DHCPv4 address 10.200.8.22/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 20 06:46:41.044837 waagent[2644]: 2026-01-20T06:46:41.044796Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 20 06:46:41.048102 waagent[2644]: 2026-01-20T06:46:41.044953Z INFO Daemon Daemon Deploy ssh public key. Jan 20 06:46:42.135471 waagent[2644]: 2026-01-20T06:46:42.135439Z INFO Daemon Daemon Provisioning complete Jan 20 06:46:42.143009 waagent[2644]: 2026-01-20T06:46:42.142983Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 20 06:46:42.144316 waagent[2644]: 2026-01-20T06:46:42.144289Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 20 06:46:42.146123 waagent[2644]: 2026-01-20T06:46:42.146032Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 20 06:46:42.234133 waagent[2739]: 2026-01-20T06:46:42.234072Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 20 06:46:42.234345 waagent[2739]: 2026-01-20T06:46:42.234153Z INFO ExtHandler ExtHandler OS: flatcar 4585.0.0 Jan 20 06:46:42.234345 waagent[2739]: 2026-01-20T06:46:42.234192Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 20 06:46:42.234345 waagent[2739]: 2026-01-20T06:46:42.234247Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 20 06:46:42.271619 waagent[2739]: 2026-01-20T06:46:42.271572Z INFO ExtHandler ExtHandler Distro: flatcar-4585.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 20 06:46:42.271755 waagent[2739]: 2026-01-20T06:46:42.271728Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 06:46:42.271815 waagent[2739]: 2026-01-20T06:46:42.271782Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 06:46:42.276131 waagent[2739]: 2026-01-20T06:46:42.276084Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 06:46:42.282173 waagent[2739]: 2026-01-20T06:46:42.282146Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 20 06:46:42.282489 waagent[2739]: 2026-01-20T06:46:42.282462Z INFO ExtHandler Jan 20 06:46:42.282528 waagent[2739]: 2026-01-20T06:46:42.282510Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 04232248-8738-4340-accb-72645f057664 eTag: 14515690915906941826 source: Fabric] Jan 20 06:46:42.282705 waagent[2739]: 2026-01-20T06:46:42.282682Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 20 06:46:42.282994 waagent[2739]: 2026-01-20T06:46:42.282970Z INFO ExtHandler Jan 20 06:46:42.283026 waagent[2739]: 2026-01-20T06:46:42.283005Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 20 06:46:42.289682 waagent[2739]: 2026-01-20T06:46:42.289654Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 06:46:42.351543 waagent[2739]: 2026-01-20T06:46:42.351500Z INFO ExtHandler Downloaded certificate {'thumbprint': '705045B09069FEF6D24BBB256B807346DC444528', 'hasPrivateKey': True} Jan 20 06:46:42.351820 waagent[2739]: 2026-01-20T06:46:42.351795Z INFO ExtHandler Fetch goal state completed Jan 20 06:46:42.363239 waagent[2739]: 2026-01-20T06:46:42.363187Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.5.4 30 Sep 2025 (Library: OpenSSL 3.5.4 30 Sep 2025) Jan 20 06:46:42.366827 waagent[2739]: 2026-01-20T06:46:42.366782Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2739 Jan 20 06:46:42.366917 waagent[2739]: 2026-01-20T06:46:42.366883Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 20 06:46:42.367107 waagent[2739]: 2026-01-20T06:46:42.367089Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 20 06:46:42.367978 waagent[2739]: 2026-01-20T06:46:42.367947Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4585.0.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 20 06:46:42.368253 waagent[2739]: 2026-01-20T06:46:42.368223Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4585.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 20 06:46:42.368346 waagent[2739]: 2026-01-20T06:46:42.368329Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 20 06:46:42.368676 waagent[2739]: 2026-01-20T06:46:42.368656Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 20 06:46:42.401302 waagent[2739]: 2026-01-20T06:46:42.401246Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 20 06:46:42.401406 waagent[2739]: 2026-01-20T06:46:42.401383Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 20 06:46:42.405931 waagent[2739]: 2026-01-20T06:46:42.405631Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 20 06:46:42.410000 systemd[1]: Reload requested from client PID 2754 ('systemctl') (unit waagent.service)... Jan 20 06:46:42.410012 systemd[1]: Reloading... Jan 20 06:46:42.482225 zram_generator::config[2792]: No configuration found. Jan 20 06:46:42.639764 systemd[1]: Reloading finished in 229 ms. Jan 20 06:46:42.652259 waagent[2739]: 2026-01-20T06:46:42.650658Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 20 06:46:42.652259 waagent[2739]: 2026-01-20T06:46:42.650736Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 20 06:46:42.882892 waagent[2739]: 2026-01-20T06:46:42.882849Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 20 06:46:42.883064 waagent[2739]: 2026-01-20T06:46:42.883043Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 20 06:46:42.883612 waagent[2739]: 2026-01-20T06:46:42.883584Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 20 06:46:42.883755 waagent[2739]: 2026-01-20T06:46:42.883726Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 06:46:42.883799 waagent[2739]: 2026-01-20T06:46:42.883782Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 06:46:42.883935 waagent[2739]: 2026-01-20T06:46:42.883915Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 20 06:46:42.884120 waagent[2739]: 2026-01-20T06:46:42.884101Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 20 06:46:42.884227 waagent[2739]: 2026-01-20T06:46:42.884180Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 20 06:46:42.884284 waagent[2739]: 2026-01-20T06:46:42.884244Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 20 06:46:42.884564 waagent[2739]: 2026-01-20T06:46:42.884528Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 20 06:46:42.884650 waagent[2739]: 2026-01-20T06:46:42.884632Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 20 06:46:42.884974 waagent[2739]: 2026-01-20T06:46:42.884929Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 20 06:46:42.885011 waagent[2739]: 2026-01-20T06:46:42.884980Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 06:46:42.885034 waagent[2739]: 2026-01-20T06:46:42.885021Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 06:46:42.885134 waagent[2739]: 2026-01-20T06:46:42.885114Z INFO EnvHandler ExtHandler Configure routes Jan 20 06:46:42.885173 waagent[2739]: 2026-01-20T06:46:42.885156Z INFO EnvHandler ExtHandler Gateway:None Jan 20 06:46:42.885230 waagent[2739]: 2026-01-20T06:46:42.885191Z INFO EnvHandler ExtHandler Routes:None Jan 20 06:46:42.885302 waagent[2739]: 2026-01-20T06:46:42.885284Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 20 06:46:42.885302 waagent[2739]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 20 06:46:42.885302 waagent[2739]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 20 06:46:42.885302 waagent[2739]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 20 06:46:42.885302 waagent[2739]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 20 06:46:42.885302 waagent[2739]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 06:46:42.885302 waagent[2739]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 06:46:42.889609 waagent[2739]: 2026-01-20T06:46:42.889585Z INFO ExtHandler ExtHandler Jan 20 06:46:42.890001 waagent[2739]: 2026-01-20T06:46:42.889981Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 4d9cebae-f4a0-42b2-be25-a75fe9ef272f correlation aa6918b2-29f3-4591-854f-2265f63c5a1f created: 2026-01-20T06:45:52.078476Z] Jan 20 06:46:42.890225 waagent[2739]: 2026-01-20T06:46:42.890196Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
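[annotation] The MonitorHandler routing-table dump above is the raw /proc/net/route format, where destination and gateway fields are little-endian hex words (0108C80A is 10.200.8.1, matching the DHCP gateway acquired earlier in this log). A small sketch of decoding those fields:

import socket, struct

def hex_to_ip(le_hex: str) -> str:
    """Decode a little-endian hex address as found in /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<L", int(le_hex, 16)))

# Values taken from the routing table dump above.
for word in ("0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
    print(word, "->", hex_to_ip(word))
# 0108C80A -> 10.200.8.1 (default gateway), 0008C80A -> 10.200.8.0 (subnet),
# 10813FA8 -> 168.63.129.16 (WireServer), FEA9FEA9 -> 169.254.169.254 (IMDS)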
Jan 20 06:46:42.890551 waagent[2739]: 2026-01-20T06:46:42.890532Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 20 06:46:42.914100 waagent[2739]: 2026-01-20T06:46:42.914035Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 20 06:46:42.914100 waagent[2739]: Try `iptables -h' or 'iptables --help' for more information.) Jan 20 06:46:42.914404 waagent[2739]: 2026-01-20T06:46:42.914378Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F7353368-9360-4888-AD5E-D4B9D1F4F806;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 20 06:46:42.951086 waagent[2739]: 2026-01-20T06:46:42.951048Z INFO MonitorHandler ExtHandler Network interfaces: Jan 20 06:46:42.951086 waagent[2739]: Executing ['ip', '-a', '-o', 'link']: Jan 20 06:46:42.951086 waagent[2739]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 20 06:46:42.951086 waagent[2739]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:68:cc:7c brd ff:ff:ff:ff:ff:ff\ alias Network Device\ altname enx000d3a68cc7c Jan 20 06:46:42.951086 waagent[2739]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:68:cc:7c brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 20 06:46:42.951086 waagent[2739]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 20 06:46:42.951086 waagent[2739]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 20 06:46:42.951086 waagent[2739]: 2: eth0 inet 10.200.8.22/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 20 06:46:42.951086 waagent[2739]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 20 06:46:42.951086 waagent[2739]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 20 06:46:42.951086 waagent[2739]: 2: eth0 inet6 fe80::20d:3aff:fe68:cc7c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 20 06:46:42.988439 waagent[2739]: 2026-01-20T06:46:42.988398Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 20 06:46:42.988439 waagent[2739]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 06:46:42.988439 waagent[2739]: pkts bytes target prot opt in out source destination Jan 20 06:46:42.988439 waagent[2739]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 06:46:42.988439 waagent[2739]: pkts bytes target prot opt in out source destination Jan 20 06:46:42.988439 waagent[2739]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 06:46:42.988439 waagent[2739]: pkts bytes target prot opt in out source destination Jan 20 06:46:42.988439 waagent[2739]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 06:46:42.988439 waagent[2739]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 06:46:42.988439 waagent[2739]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 06:46:42.990922 waagent[2739]: 2026-01-20T06:46:42.990883Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 20 06:46:42.990922 waagent[2739]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 06:46:42.990922 waagent[2739]: pkts bytes 
target prot opt in out source destination Jan 20 06:46:42.990922 waagent[2739]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 06:46:42.990922 waagent[2739]: pkts bytes target prot opt in out source destination Jan 20 06:46:42.990922 waagent[2739]: Chain OUTPUT (policy ACCEPT 2 packets, 104 bytes) Jan 20 06:46:42.990922 waagent[2739]: pkts bytes target prot opt in out source destination Jan 20 06:46:42.990922 waagent[2739]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 06:46:42.990922 waagent[2739]: 1 60 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 06:46:42.990922 waagent[2739]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 06:46:50.185046 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 06:46:50.186697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:46:50.822134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:46:50.829409 (kubelet)[2894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:46:50.866083 kubelet[2894]: E0120 06:46:50.866041 2894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:46:50.868860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:46:50.869043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:46:50.869445 systemd[1]: kubelet.service: Consumed 120ms CPU time, 109.5M memory peak. Jan 20 06:47:01.119452 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 06:47:01.120633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:47:01.568870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:47:01.571773 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:47:01.601831 kubelet[2909]: E0120 06:47:01.601799 2909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:47:01.603247 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:47:01.603371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:47:01.603724 systemd[1]: kubelet.service: Consumed 110ms CPU time, 108.8M memory peak. Jan 20 06:47:02.110889 chronyd[2501]: Selected source PHC0 Jan 20 06:47:08.378314 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 06:47:08.379279 systemd[1]: Started sshd@0-10.200.8.22:22-10.200.16.10:51394.service - OpenSSH per-connection server daemon (10.200.16.10:51394). 
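[annotation] The EnvHandler output above shows the three WireServer rules waagent installs in the security table's OUTPUT chain (allow DNS to 168.63.129.16, allow root-owned traffic to it, drop everything else), while the earlier warning shows its combined list-and-zero invocation being rejected by iptables-nft. A hedged sketch that only lists the counters; the flag combination is an assumption of commonly accepted iptables options, not waagent's own call, and it needs root:

import subprocess

# List (without zeroing) the per-rule packet/byte counters for the chain that
# waagent manages; run as root.
out = subprocess.run(
    ["iptables", "-w", "-t", "security", "-L", "OUTPUT", "-n", "-x", "-v"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    if "168.63.129.16" in line:
        print(line)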
Jan 20 06:47:09.110516 sshd[2917]: Accepted publickey for core from 10.200.16.10 port 51394 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:47:09.111399 sshd-session[2917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:47:09.115229 systemd-logind[2525]: New session 4 of user core. Jan 20 06:47:09.124363 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 06:47:09.534278 systemd[1]: Started sshd@1-10.200.8.22:22-10.200.16.10:51408.service - OpenSSH per-connection server daemon (10.200.16.10:51408). Jan 20 06:47:10.085943 sshd[2924]: Accepted publickey for core from 10.200.16.10 port 51408 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:47:10.086767 sshd-session[2924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:47:10.090295 systemd-logind[2525]: New session 5 of user core. Jan 20 06:47:10.100347 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 06:47:10.400320 sshd[2928]: Connection closed by 10.200.16.10 port 51408 Jan 20 06:47:10.401331 sshd-session[2924]: pam_unix(sshd:session): session closed for user core Jan 20 06:47:10.403384 systemd[1]: sshd@1-10.200.8.22:22-10.200.16.10:51408.service: Deactivated successfully. Jan 20 06:47:10.405151 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 06:47:10.405877 systemd-logind[2525]: Session 5 logged out. Waiting for processes to exit. Jan 20 06:47:10.406637 systemd-logind[2525]: Removed session 5. Jan 20 06:47:10.519025 systemd[1]: Started sshd@2-10.200.8.22:22-10.200.16.10:40484.service - OpenSSH per-connection server daemon (10.200.16.10:40484). Jan 20 06:47:11.071009 sshd[2934]: Accepted publickey for core from 10.200.16.10 port 40484 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:47:11.071429 sshd-session[2934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:47:11.074939 systemd-logind[2525]: New session 6 of user core. Jan 20 06:47:11.077337 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 06:47:11.381887 sshd[2938]: Connection closed by 10.200.16.10 port 40484 Jan 20 06:47:11.382324 sshd-session[2934]: pam_unix(sshd:session): session closed for user core Jan 20 06:47:11.384771 systemd-logind[2525]: Session 6 logged out. Waiting for processes to exit. Jan 20 06:47:11.384990 systemd[1]: sshd@2-10.200.8.22:22-10.200.16.10:40484.service: Deactivated successfully. Jan 20 06:47:11.386144 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 06:47:11.387468 systemd-logind[2525]: Removed session 6. Jan 20 06:47:11.496009 systemd[1]: Started sshd@3-10.200.8.22:22-10.200.16.10:40490.service - OpenSSH per-connection server daemon (10.200.16.10:40490). Jan 20 06:47:11.603679 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 06:47:11.604943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:47:12.049419 sshd[2944]: Accepted publickey for core from 10.200.16.10 port 40490 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:47:12.050058 sshd-session[2944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:47:12.053562 systemd-logind[2525]: New session 7 of user core. Jan 20 06:47:12.064348 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 06:47:12.789607 sshd[2951]: Connection closed by 10.200.16.10 port 40490 Jan 20 06:47:12.790320 sshd-session[2944]: pam_unix(sshd:session): session closed for user core Jan 20 06:47:12.792259 systemd[1]: sshd@3-10.200.8.22:22-10.200.16.10:40490.service: Deactivated successfully. Jan 20 06:47:12.793893 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 06:47:12.794817 systemd-logind[2525]: Session 7 logged out. Waiting for processes to exit. Jan 20 06:47:12.795404 systemd-logind[2525]: Removed session 7. Jan 20 06:47:12.907034 systemd[1]: Started sshd@4-10.200.8.22:22-10.200.16.10:40494.service - OpenSSH per-connection server daemon (10.200.16.10:40494). Jan 20 06:47:13.458570 sshd[2957]: Accepted publickey for core from 10.200.16.10 port 40494 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:47:13.459343 sshd-session[2957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:47:13.462873 systemd-logind[2525]: New session 8 of user core. Jan 20 06:47:13.464346 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 06:47:14.776422 sudo[2962]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 06:47:14.776631 sudo[2962]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:47:14.781922 sudo[2962]: pam_unix(sudo:session): session closed for user root Jan 20 06:47:14.886026 sshd[2961]: Connection closed by 10.200.16.10 port 40494 Jan 20 06:47:14.887285 sshd-session[2957]: pam_unix(sshd:session): session closed for user core Jan 20 06:47:14.889814 systemd[1]: sshd@4-10.200.8.22:22-10.200.16.10:40494.service: Deactivated successfully. Jan 20 06:47:14.891011 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 06:47:14.891669 systemd-logind[2525]: Session 8 logged out. Waiting for processes to exit. Jan 20 06:47:14.892768 systemd-logind[2525]: Removed session 8. Jan 20 06:47:15.013126 systemd[1]: Started sshd@5-10.200.8.22:22-10.200.16.10:40510.service - OpenSSH per-connection server daemon (10.200.16.10:40510). Jan 20 06:47:15.126980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:47:15.129714 (kubelet)[2977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:47:15.161358 kubelet[2977]: E0120 06:47:15.161312 2977 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:47:15.162570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:47:15.162668 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:47:15.163072 systemd[1]: kubelet.service: Consumed 111ms CPU time, 107.7M memory peak. Jan 20 06:47:15.567490 sshd[2969]: Accepted publickey for core from 10.200.16.10 port 40510 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:47:15.567880 sshd-session[2969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:47:15.571282 systemd-logind[2525]: New session 9 of user core. Jan 20 06:47:15.581359 systemd[1]: Started session-9.scope - Session 9 of User core. 
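[annotation] Each "Accepted publickey ... SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M" line above identifies the client key by its OpenSSH SHA256 fingerprint: unpadded base64 of a SHA-256 digest over the raw key blob. A minimal sketch that recomputes that format for a key in the authorized_keys file mentioned earlier in this log; it should reproduce the logged value for the matching key:

import base64, hashlib

# Recompute the "SHA256:..." fingerprint format shown in the sshd lines above.
with open("/home/core/.ssh/authorized_keys") as f:
    b64_blob = f.readline().split()[1]          # "ssh-rsa AAAA... comment"
digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))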
Jan 20 06:47:15.778842 sudo[2987]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 06:47:15.779041 sudo[2987]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:47:15.783815 sudo[2987]: pam_unix(sudo:session): session closed for user root Jan 20 06:47:15.787604 sudo[2986]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 06:47:15.787815 sudo[2986]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:47:15.792678 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 06:47:15.827234 kernel: kauditd_printk_skb: 160 callbacks suppressed Jan 20 06:47:15.827294 kernel: audit: type=1305 audit(1768891635.825:260): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 06:47:15.825000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 06:47:15.828275 augenrules[3011]: No rules Jan 20 06:47:15.827897 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 06:47:15.828073 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 06:47:15.825000 audit[3011]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff378953b0 a2=420 a3=0 items=0 ppid=2992 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:15.830326 sudo[2986]: pam_unix(sudo:session): session closed for user root Jan 20 06:47:15.833405 kernel: audit: type=1300 audit(1768891635.825:260): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff378953b0 a2=420 a3=0 items=0 ppid=2992 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:15.825000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:47:15.835280 kernel: audit: type=1327 audit(1768891635.825:260): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 06:47:15.835312 kernel: audit: type=1130 audit(1768891635.828:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.840661 kernel: audit: type=1131 audit(1768891635.828:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:47:15.828000 audit[2986]: USER_END pid=2986 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.843729 kernel: audit: type=1106 audit(1768891635.828:263): pid=2986 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.828000 audit[2986]: CRED_DISP pid=2986 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.846438 kernel: audit: type=1104 audit(1768891635.828:264): pid=2986 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.935467 sshd[2985]: Connection closed by 10.200.16.10 port 40510 Jan 20 06:47:15.936336 sshd-session[2969]: pam_unix(sshd:session): session closed for user core Jan 20 06:47:15.937000 audit[2969]: USER_END pid=2969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:15.938700 systemd[1]: sshd@5-10.200.8.22:22-10.200.16.10:40510.service: Deactivated successfully. Jan 20 06:47:15.940889 systemd-logind[2525]: Session 9 logged out. Waiting for processes to exit. Jan 20 06:47:15.947721 kernel: audit: type=1106 audit(1768891635.937:265): pid=2969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:15.947763 kernel: audit: type=1104 audit(1768891635.937:266): pid=2969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:15.937000 audit[2969]: CRED_DISP pid=2969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:15.944236 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 06:47:15.945333 systemd-logind[2525]: Removed session 9. Jan 20 06:47:15.951627 kernel: audit: type=1131 audit(1768891635.937:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.22:22-10.200.16.10:40510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:15.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.8.22:22-10.200.16.10:40510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:47:16.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.22:22-10.200.16.10:40526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:16.068235 systemd[1]: Started sshd@6-10.200.8.22:22-10.200.16.10:40526.service - OpenSSH per-connection server daemon (10.200.16.10:40526). Jan 20 06:47:16.616000 audit[3020]: USER_ACCT pid=3020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:16.617102 sshd[3020]: Accepted publickey for core from 10.200.16.10 port 40526 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:47:16.617000 audit[3020]: CRED_ACQ pid=3020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:16.617000 audit[3020]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5bf4dbf0 a2=3 a3=0 items=0 ppid=1 pid=3020 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:16.617000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:47:16.618125 sshd-session[3020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:47:16.621724 systemd-logind[2525]: New session 10 of user core. Jan 20 06:47:16.632363 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 06:47:16.633000 audit[3020]: USER_START pid=3020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:16.634000 audit[3024]: CRED_ACQ pid=3024 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:16.827000 audit[3025]: USER_ACCT pid=3025 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:16.828239 sudo[3025]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 06:47:16.828000 audit[3025]: CRED_REFR pid=3025 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:16.828000 audit[3025]: USER_START pid=3025 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:16.828434 sudo[3025]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 06:47:18.779797 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jan 20 06:47:18.894523 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 06:47:18.910428 (dockerd)[3044]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 06:47:19.533216 waagent[2739]: 2026-01-20T06:47:19.533181Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 20 06:47:19.538392 waagent[2739]: 2026-01-20T06:47:19.538368Z INFO ExtHandler Jan 20 06:47:19.538455 waagent[2739]: 2026-01-20T06:47:19.538435Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: eabb8685-cf37-4204-be2a-48e6a7c10097 eTag: 11280491931064323854 source: Fabric] Jan 20 06:47:19.538663 waagent[2739]: 2026-01-20T06:47:19.538642Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 20 06:47:19.538968 waagent[2739]: 2026-01-20T06:47:19.538944Z INFO ExtHandler Jan 20 06:47:19.538997 waagent[2739]: 2026-01-20T06:47:19.538983Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 20 06:47:19.578187 waagent[2739]: 2026-01-20T06:47:19.578157Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 06:47:19.636787 waagent[2739]: 2026-01-20T06:47:19.636749Z INFO ExtHandler Downloaded certificate {'thumbprint': '705045B09069FEF6D24BBB256B807346DC444528', 'hasPrivateKey': True} Jan 20 06:47:19.637076 waagent[2739]: 2026-01-20T06:47:19.637050Z INFO ExtHandler Fetch goal state completed Jan 20 06:47:19.637351 waagent[2739]: 2026-01-20T06:47:19.637330Z INFO ExtHandler ExtHandler Jan 20 06:47:19.637389 waagent[2739]: 2026-01-20T06:47:19.637370Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: b89cebe9-ccfb-4517-a946-8ba8ace58566 correlation aa6918b2-29f3-4591-854f-2265f63c5a1f created: 2026-01-20T06:47:10.859741Z] Jan 20 06:47:19.637566 waagent[2739]: 2026-01-20T06:47:19.637546Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 20 06:47:19.637932 waagent[2739]: 2026-01-20T06:47:19.637913Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 20 06:47:20.359596 dockerd[3044]: time="2026-01-20T06:47:20.359552965Z" level=info msg="Starting up" Jan 20 06:47:20.360054 dockerd[3044]: time="2026-01-20T06:47:20.360034836Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 06:47:20.368293 dockerd[3044]: time="2026-01-20T06:47:20.368258537Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 06:47:20.825136 dockerd[3044]: time="2026-01-20T06:47:20.825064184Z" level=info msg="Loading containers: start." 
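A note on two numeric fields that recur in the audit records above and below: auid=4294967295 is (uint32)-1, the kernel's "unset" login UID used for daemon-originated events, while the interactive core sessions carry auid=500; arch=c000003e is AUDIT_ARCH_X86_64. The sketch below is illustrative only; the helper name describe_auid is not part of any tool in this log.

    # Minimal sketch (illustrative only): interpret auid/arch values as they
    # appear in the audit records in this journal.
    UNSET_AUID = 0xFFFFFFFF  # 4294967295, i.e. (uint32)-1: no login UID recorded

    def describe_auid(auid: int) -> str:
        return "unset (-1): not tied to a login session" if auid == UNSET_AUID else f"login uid {auid}"

    print(describe_auid(4294967295))   # daemon-originated records (systemd, dockerd, iptables)
    print(describe_auid(500))          # the interactive "core" SSH sessions above

    # arch=c000003e decomposes into 64-bit | little-endian | ELF machine 62 (x86-64).
    AUDIT_ARCH_X86_64 = 0x80000000 | 0x40000000 | 62
    assert AUDIT_ARCH_X86_64 == 0xC000003E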
Jan 20 06:47:20.889230 kernel: Initializing XFRM netlink socket Jan 20 06:47:20.944288 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 20 06:47:20.944351 kernel: audit: type=1325 audit(1768891640.942:277): table=nat:5 family=2 entries=2 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.942000 audit[3095]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.942000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff3a81f710 a2=0 a3=0 items=0 ppid=3044 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.951678 kernel: audit: type=1300 audit(1768891640.942:277): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff3a81f710 a2=0 a3=0 items=0 ppid=3044 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.954095 kernel: audit: type=1327 audit(1768891640.942:277): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 06:47:20.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 06:47:20.956608 kernel: audit: type=1325 audit(1768891640.945:278): table=filter:6 family=2 entries=2 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.945000 audit[3097]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=3097 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.961107 kernel: audit: type=1300 audit(1768891640.945:278): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffff2437c00 a2=0 a3=0 items=0 ppid=3044 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.945000 audit[3097]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffff2437c00 a2=0 a3=0 items=0 ppid=3044 pid=3097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.945000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 06:47:20.966355 kernel: audit: type=1327 audit(1768891640.945:278): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 06:47:20.966410 kernel: audit: type=1325 audit(1768891640.951:279): table=filter:7 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.951000 audit[3099]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.951000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd215f57e0 a2=0 a3=0 items=0 ppid=3044 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.974463 kernel: audit: type=1300 audit(1768891640.951:279): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd215f57e0 a2=0 a3=0 items=0 ppid=3044 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 06:47:20.977008 kernel: audit: type=1327 audit(1768891640.951:279): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 06:47:20.954000 audit[3101]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.979317 kernel: audit: type=1325 audit(1768891640.954:280): table=filter:8 family=2 entries=1 op=nft_register_chain pid=3101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.954000 audit[3101]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe71601b50 a2=0 a3=0 items=0 ppid=3044 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 06:47:20.958000 audit[3103]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.958000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9f059410 a2=0 a3=0 items=0 ppid=3044 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.958000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 06:47:20.961000 audit[3105]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.961000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcd0c800c0 a2=0 a3=0 items=0 ppid=3044 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.961000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:47:20.964000 audit[3107]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=3107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.964000 audit[3107]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe17bdb5e0 a2=0 a3=0 items=0 ppid=3044 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.964000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 06:47:20.967000 audit[3109]: NETFILTER_CFG table=nat:12 family=2 entries=2 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:20.967000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe9f77e860 a2=0 a3=0 items=0 ppid=3044 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:20.967000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 06:47:21.078000 audit[3112]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.078000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffdb5d08420 a2=0 a3=0 items=0 ppid=3044 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.078000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 06:47:21.080000 audit[3114]: NETFILTER_CFG table=filter:14 family=2 entries=2 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.080000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd8d978e60 a2=0 a3=0 items=0 ppid=3044 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 06:47:21.081000 audit[3116]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.081000 audit[3116]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd21e8da70 a2=0 a3=0 items=0 ppid=3044 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.081000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 06:47:21.083000 audit[3118]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.083000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcb7382c10 a2=0 a3=0 items=0 ppid=3044 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.083000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:47:21.085000 audit[3120]: NETFILTER_CFG table=filter:17 family=2 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.085000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffffa8cca00 a2=0 a3=0 items=0 ppid=3044 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.085000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 06:47:21.141000 audit[3150]: NETFILTER_CFG table=nat:18 family=10 entries=2 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.141000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff20db3830 a2=0 a3=0 items=0 ppid=3044 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.141000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 06:47:21.143000 audit[3152]: NETFILTER_CFG table=filter:19 family=10 entries=2 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.143000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff334d0fd0 a2=0 a3=0 items=0 ppid=3044 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 06:47:21.144000 audit[3154]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.144000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff99788e50 a2=0 a3=0 items=0 ppid=3044 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 06:47:21.146000 audit[3156]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.146000 audit[3156]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff98296820 a2=0 a3=0 items=0 ppid=3044 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 06:47:21.147000 audit[3158]: NETFILTER_CFG table=filter:22 family=10 entries=1 
op=nft_register_chain pid=3158 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.147000 audit[3158]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc69a671f0 a2=0 a3=0 items=0 ppid=3044 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 06:47:21.149000 audit[3160]: NETFILTER_CFG table=filter:23 family=10 entries=1 op=nft_register_chain pid=3160 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.149000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe21fc8ef0 a2=0 a3=0 items=0 ppid=3044 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:47:21.150000 audit[3162]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.150000 audit[3162]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffea1af84b0 a2=0 a3=0 items=0 ppid=3044 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 06:47:21.152000 audit[3164]: NETFILTER_CFG table=nat:25 family=10 entries=2 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.152000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffda31e8eb0 a2=0 a3=0 items=0 ppid=3044 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 06:47:21.154000 audit[3166]: NETFILTER_CFG table=nat:26 family=10 entries=2 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.154000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffc3b20eae0 a2=0 a3=0 items=0 ppid=3044 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 06:47:21.155000 audit[3168]: NETFILTER_CFG table=filter:27 family=10 entries=2 op=nft_register_chain pid=3168 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.155000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff2fcb6980 a2=0 a3=0 items=0 ppid=3044 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 06:47:21.157000 audit[3170]: NETFILTER_CFG table=filter:28 family=10 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.157000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd64a542c0 a2=0 a3=0 items=0 ppid=3044 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 06:47:21.158000 audit[3172]: NETFILTER_CFG table=filter:29 family=10 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.158000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffb76ad6c0 a2=0 a3=0 items=0 ppid=3044 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 06:47:21.160000 audit[3174]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.160000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff8c7eeb50 a2=0 a3=0 items=0 ppid=3044 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 06:47:21.164000 audit[3179]: NETFILTER_CFG table=filter:31 family=2 entries=1 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.164000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd1ee49ac0 a2=0 a3=0 items=0 ppid=3044 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 06:47:21.165000 audit[3181]: NETFILTER_CFG table=filter:32 family=2 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.165000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffc6ab6310 a2=0 a3=0 
items=0 ppid=3044 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 06:47:21.167000 audit[3183]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.167000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff44dece10 a2=0 a3=0 items=0 ppid=3044 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 06:47:21.168000 audit[3185]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.168000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd87d9cdd0 a2=0 a3=0 items=0 ppid=3044 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.168000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 06:47:21.170000 audit[3187]: NETFILTER_CFG table=filter:35 family=10 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.170000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe04b1dd90 a2=0 a3=0 items=0 ppid=3044 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 06:47:21.171000 audit[3189]: NETFILTER_CFG table=filter:36 family=10 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:21.171000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdca2f75e0 a2=0 a3=0 items=0 ppid=3044 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 06:47:21.211000 audit[3194]: NETFILTER_CFG table=nat:37 family=2 entries=2 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.211000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd14eb5aa0 a2=0 a3=0 items=0 ppid=3044 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.211000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 06:47:21.213000 audit[3196]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.213000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffea954a540 a2=0 a3=0 items=0 ppid=3044 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 06:47:21.219000 audit[3204]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.219000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffd94df0160 a2=0 a3=0 items=0 ppid=3044 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.219000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 06:47:21.223000 audit[3209]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.223000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd06e0d4d0 a2=0 a3=0 items=0 ppid=3044 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 06:47:21.224000 audit[3211]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.224000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd3c365190 a2=0 a3=0 items=0 ppid=3044 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.224000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 06:47:21.226000 audit[3213]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.226000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf59eb3b0 a2=0 a3=0 items=0 ppid=3044 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 06:47:21.228000 audit[3215]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.228000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc13f7b100 a2=0 a3=0 items=0 ppid=3044 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 06:47:21.229000 audit[3217]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:21.229000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc1b8a5d50 a2=0 a3=0 items=0 ppid=3044 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:21.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 06:47:21.231272 systemd-networkd[2169]: docker0: Link UP Jan 20 06:47:21.244668 dockerd[3044]: time="2026-01-20T06:47:21.244642008Z" level=info msg="Loading containers: done." Jan 20 06:47:21.336411 dockerd[3044]: time="2026-01-20T06:47:21.335740361Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 06:47:21.336411 dockerd[3044]: time="2026-01-20T06:47:21.335790695Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 06:47:21.336411 dockerd[3044]: time="2026-01-20T06:47:21.335846071Z" level=info msg="Initializing buildkit" Jan 20 06:47:21.374937 dockerd[3044]: time="2026-01-20T06:47:21.374901082Z" level=info msg="Completed buildkit initialization" Jan 20 06:47:21.377379 dockerd[3044]: time="2026-01-20T06:47:21.377358446Z" level=info msg="Daemon has completed initialization" Jan 20 06:47:21.377815 dockerd[3044]: time="2026-01-20T06:47:21.377425249Z" level=info msg="API listen on /run/docker.sock" Jan 20 06:47:21.377515 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 06:47:21.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:22.332421 containerd[2553]: time="2026-01-20T06:47:22.332375748Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 06:47:23.133469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844143663.mount: Deactivated successfully. 
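The NETFILTER_CFG records above are dockerd programming its DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains via iptables-nft; under arch=c000003e, syscall 46 is sendmsg and 44 is sendto, i.e. the netlink messages xtables-nft-multi sends to the kernel. The PROCTITLE field in each record is the command line, hex-encoded with NUL bytes separating arguments, so it can be decoded directly. A small sketch, using one value copied from the records above:

    # Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv).
    # The sample is taken verbatim from one of the iptables records above.
    hex_proctitle = (
        "2F7573722F62696E2F69707461626C6573002D2D77616974"
        "002D74006E6174002D4E00444F434B4552"
    )
    argv = bytes.fromhex(hex_proctitle).decode("utf-8").split("\x00")
    print(argv)            # ['/usr/bin/iptables', '--wait', '-t', 'nat', '-N', 'DOCKER']
    print(" ".join(argv))  # /usr/bin/iptables --wait -t nat -N DOCKER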
Jan 20 06:47:23.636941 update_engine[2526]: I20260120 06:47:23.636898 2526 update_attempter.cc:509] Updating boot flags... Jan 20 06:47:24.147153 containerd[2553]: time="2026-01-20T06:47:24.147116532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:24.149563 containerd[2553]: time="2026-01-20T06:47:24.149423158Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27506175" Jan 20 06:47:24.152203 containerd[2553]: time="2026-01-20T06:47:24.152181266Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:24.155980 containerd[2553]: time="2026-01-20T06:47:24.155955890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:24.156886 containerd[2553]: time="2026-01-20T06:47:24.156859691Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 1.824434078s" Jan 20 06:47:24.157043 containerd[2553]: time="2026-01-20T06:47:24.156954249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 06:47:24.157440 containerd[2553]: time="2026-01-20T06:47:24.157415820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 06:47:25.353769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 06:47:25.355945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:47:25.869049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:47:25.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:25.883423 (kubelet)[3346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 06:47:25.912389 kubelet[3346]: E0120 06:47:25.912358 3346 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 06:47:25.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:47:25.913645 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 06:47:25.913729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 06:47:25.914059 systemd[1]: kubelet.service: Consumed 116ms CPU time, 110.1M memory peak. 
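For a rough sense of scale, the containerd lines above report both the bytes read for a pull and its wall-clock duration, so the effective throughput is one division away. A minimal sketch using the kube-apiserver figures quoted above, assuming "bytes read" counts the bytes actually downloaded:

    # Rough pull-throughput estimate from the containerd figures logged above.
    bytes_read = 27_506_175      # "bytes read=27506175" for kube-apiserver:v1.32.11
    elapsed_s  = 1.824434078     # "in 1.824434078s"
    print(f"~{bytes_read / elapsed_s / 2**20:.1f} MiB/s")   # roughly 14 MiB/s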
Jan 20 06:47:25.924384 containerd[2553]: time="2026-01-20T06:47:25.924357008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:25.926833 containerd[2553]: time="2026-01-20T06:47:25.926803913Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 20 06:47:25.929352 containerd[2553]: time="2026-01-20T06:47:25.929319871Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:25.933726 containerd[2553]: time="2026-01-20T06:47:25.933230961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:25.934097 containerd[2553]: time="2026-01-20T06:47:25.934074446Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.776631279s" Jan 20 06:47:25.934170 containerd[2553]: time="2026-01-20T06:47:25.934155351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 06:47:25.936851 containerd[2553]: time="2026-01-20T06:47:25.936835007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 06:47:27.140219 containerd[2553]: time="2026-01-20T06:47:27.140190641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:27.142395 containerd[2553]: time="2026-01-20T06:47:27.142371466Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 20 06:47:27.144959 containerd[2553]: time="2026-01-20T06:47:27.144925435Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:27.148316 containerd[2553]: time="2026-01-20T06:47:27.148290420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:27.148960 containerd[2553]: time="2026-01-20T06:47:27.148938318Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.211927138s" Jan 20 06:47:27.149003 containerd[2553]: time="2026-01-20T06:47:27.148965369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 06:47:27.149523 
containerd[2553]: time="2026-01-20T06:47:27.149452258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 06:47:27.935580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049854780.mount: Deactivated successfully. Jan 20 06:47:28.248320 containerd[2553]: time="2026-01-20T06:47:28.248258245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:28.250849 containerd[2553]: time="2026-01-20T06:47:28.250790240Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 20 06:47:28.253009 containerd[2553]: time="2026-01-20T06:47:28.252989862Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:28.256337 containerd[2553]: time="2026-01-20T06:47:28.255922817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:28.256337 containerd[2553]: time="2026-01-20T06:47:28.256121282Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.106525771s" Jan 20 06:47:28.256337 containerd[2553]: time="2026-01-20T06:47:28.256141061Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 06:47:28.256444 containerd[2553]: time="2026-01-20T06:47:28.256398944Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 06:47:28.887458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3554996674.mount: Deactivated successfully. 
Jan 20 06:47:29.609271 containerd[2553]: time="2026-01-20T06:47:29.609244898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:29.611714 containerd[2553]: time="2026-01-20T06:47:29.611690350Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=173" Jan 20 06:47:29.614248 containerd[2553]: time="2026-01-20T06:47:29.614215872Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:29.617437 containerd[2553]: time="2026-01-20T06:47:29.617402953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:29.617984 containerd[2553]: time="2026-01-20T06:47:29.617889493Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.361470068s" Jan 20 06:47:29.617984 containerd[2553]: time="2026-01-20T06:47:29.617914040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 06:47:29.618407 containerd[2553]: time="2026-01-20T06:47:29.618326460Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 06:47:30.130723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1830106254.mount: Deactivated successfully. 
Jan 20 06:47:30.145519 containerd[2553]: time="2026-01-20T06:47:30.145493775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:47:30.147885 containerd[2553]: time="2026-01-20T06:47:30.147865909Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=881" Jan 20 06:47:30.150986 containerd[2553]: time="2026-01-20T06:47:30.150955756Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:47:30.154244 containerd[2553]: time="2026-01-20T06:47:30.154201530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 06:47:30.154629 containerd[2553]: time="2026-01-20T06:47:30.154543270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 536.196034ms" Jan 20 06:47:30.154629 containerd[2553]: time="2026-01-20T06:47:30.154565297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 06:47:30.154898 containerd[2553]: time="2026-01-20T06:47:30.154875923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 06:47:30.714081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905806454.mount: Deactivated successfully. 
Jan 20 06:47:32.308503 containerd[2553]: time="2026-01-20T06:47:32.308472133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:32.310831 containerd[2553]: time="2026-01-20T06:47:32.310681290Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Jan 20 06:47:32.313278 containerd[2553]: time="2026-01-20T06:47:32.313258134Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:32.316675 containerd[2553]: time="2026-01-20T06:47:32.316652746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:32.317293 containerd[2553]: time="2026-01-20T06:47:32.317271850Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.162375649s" Jan 20 06:47:32.317336 containerd[2553]: time="2026-01-20T06:47:32.317295471Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 06:47:34.355239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:47:34.355393 systemd[1]: kubelet.service: Consumed 116ms CPU time, 110.1M memory peak. Jan 20 06:47:34.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:34.362250 kernel: kauditd_printk_skb: 113 callbacks suppressed Jan 20 06:47:34.362314 kernel: audit: type=1130 audit(1768891654.354:320): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:34.359444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:47:34.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:34.363226 kernel: audit: type=1131 audit(1768891654.354:321): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:34.383259 systemd[1]: Reload requested from client PID 3507 ('systemctl') (unit session-10.scope)... Jan 20 06:47:34.383345 systemd[1]: Reloading... Jan 20 06:47:34.472229 zram_generator::config[3553]: No configuration found. Jan 20 06:47:34.648147 systemd[1]: Reloading finished in 264 ms. Jan 20 06:47:34.733137 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 06:47:34.733187 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 06:47:34.733552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
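The reload that follows re-attaches systemd's per-unit cgroup BPF programs, which appears to be what the burst of "audit: BPF prog-id=... op=LOAD/UNLOAD" records just below reflects. A small sketch for tallying such records from journal text; the sample lines are abbreviated copies of the ones below, not a complete extract:

    import re

    # Abbreviated sample records in the same format as the journal lines below.
    sample = """\
    audit: BPF prog-id=87 op=LOAD
    audit: BPF prog-id=82 op=UNLOAD
    audit: BPF prog-id=88 op=LOAD
    audit: BPF prog-id=67 op=UNLOAD
    """

    events  = re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", sample)
    loads   = sum(1 for _, op in events if op == "LOAD")
    unloads = sum(1 for _, op in events if op == "UNLOAD")
    print(f"{loads} programs loaded, {unloads} unloaded")   # 2 programs loaded, 2 unloaded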
Jan 20 06:47:34.733609 systemd[1]: kubelet.service: Consumed 68ms CPU time, 74.3M memory peak. Jan 20 06:47:34.737343 kernel: audit: type=1130 audit(1768891654.732:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:47:34.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 06:47:34.738635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:47:34.738000 audit: BPF prog-id=87 op=LOAD Jan 20 06:47:34.743577 kernel: audit: type=1334 audit(1768891654.738:323): prog-id=87 op=LOAD Jan 20 06:47:34.743627 kernel: audit: type=1334 audit(1768891654.738:324): prog-id=82 op=UNLOAD Jan 20 06:47:34.738000 audit: BPF prog-id=82 op=UNLOAD Jan 20 06:47:34.739000 audit: BPF prog-id=88 op=LOAD Jan 20 06:47:34.744735 kernel: audit: type=1334 audit(1768891654.739:325): prog-id=88 op=LOAD Jan 20 06:47:34.749755 kernel: audit: type=1334 audit(1768891654.739:326): prog-id=67 op=UNLOAD Jan 20 06:47:34.749808 kernel: audit: type=1334 audit(1768891654.739:327): prog-id=89 op=LOAD Jan 20 06:47:34.739000 audit: BPF prog-id=67 op=UNLOAD Jan 20 06:47:34.739000 audit: BPF prog-id=89 op=LOAD Jan 20 06:47:34.751902 kernel: audit: type=1334 audit(1768891654.739:328): prog-id=90 op=LOAD Jan 20 06:47:34.739000 audit: BPF prog-id=90 op=LOAD Jan 20 06:47:34.753236 kernel: audit: type=1334 audit(1768891654.739:329): prog-id=68 op=UNLOAD Jan 20 06:47:34.739000 audit: BPF prog-id=68 op=UNLOAD Jan 20 06:47:34.739000 audit: BPF prog-id=69 op=UNLOAD Jan 20 06:47:34.739000 audit: BPF prog-id=91 op=LOAD Jan 20 06:47:34.739000 audit: BPF prog-id=86 op=UNLOAD Jan 20 06:47:34.740000 audit: BPF prog-id=92 op=LOAD Jan 20 06:47:34.740000 audit: BPF prog-id=83 op=UNLOAD Jan 20 06:47:34.740000 audit: BPF prog-id=93 op=LOAD Jan 20 06:47:34.740000 audit: BPF prog-id=94 op=LOAD Jan 20 06:47:34.740000 audit: BPF prog-id=84 op=UNLOAD Jan 20 06:47:34.740000 audit: BPF prog-id=85 op=UNLOAD Jan 20 06:47:34.741000 audit: BPF prog-id=95 op=LOAD Jan 20 06:47:34.741000 audit: BPF prog-id=96 op=LOAD Jan 20 06:47:34.743000 audit: BPF prog-id=74 op=UNLOAD Jan 20 06:47:34.743000 audit: BPF prog-id=75 op=UNLOAD Jan 20 06:47:34.743000 audit: BPF prog-id=97 op=LOAD Jan 20 06:47:34.743000 audit: BPF prog-id=73 op=UNLOAD Jan 20 06:47:34.746000 audit: BPF prog-id=98 op=LOAD Jan 20 06:47:34.746000 audit: BPF prog-id=79 op=UNLOAD Jan 20 06:47:34.746000 audit: BPF prog-id=99 op=LOAD Jan 20 06:47:34.746000 audit: BPF prog-id=100 op=LOAD Jan 20 06:47:34.746000 audit: BPF prog-id=80 op=UNLOAD Jan 20 06:47:34.746000 audit: BPF prog-id=81 op=UNLOAD Jan 20 06:47:34.746000 audit: BPF prog-id=101 op=LOAD Jan 20 06:47:34.746000 audit: BPF prog-id=70 op=UNLOAD Jan 20 06:47:34.748000 audit: BPF prog-id=102 op=LOAD Jan 20 06:47:34.748000 audit: BPF prog-id=103 op=LOAD Jan 20 06:47:34.748000 audit: BPF prog-id=71 op=UNLOAD Jan 20 06:47:34.748000 audit: BPF prog-id=72 op=UNLOAD Jan 20 06:47:34.748000 audit: BPF prog-id=104 op=LOAD Jan 20 06:47:34.748000 audit: BPF prog-id=76 op=UNLOAD Jan 20 06:47:34.748000 audit: BPF prog-id=105 op=LOAD Jan 20 06:47:34.748000 audit: BPF prog-id=106 op=LOAD Jan 20 06:47:34.748000 audit: BPF prog-id=77 op=UNLOAD Jan 20 06:47:34.748000 audit: BPF prog-id=78 op=UNLOAD Jan 20 06:47:35.220039 systemd[1]: 
Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:47:35.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:35.223365 (kubelet)[3624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:47:35.257705 kubelet[3624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:47:35.257705 kubelet[3624]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:47:35.257705 kubelet[3624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:47:35.257923 kubelet[3624]: I0120 06:47:35.257756 3624 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:47:35.345105 kubelet[3624]: I0120 06:47:35.345078 3624 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:47:35.345105 kubelet[3624]: I0120 06:47:35.345095 3624 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:47:35.345294 kubelet[3624]: I0120 06:47:35.345283 3624 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:47:35.368679 kubelet[3624]: I0120 06:47:35.368663 3624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:47:35.369075 kubelet[3624]: E0120 06:47:35.369049 3624 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:47:35.375768 kubelet[3624]: I0120 06:47:35.375726 3624 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:47:35.377415 kubelet[3624]: I0120 06:47:35.377403 3624 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 06:47:35.378556 kubelet[3624]: I0120 06:47:35.378527 3624 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:47:35.378684 kubelet[3624]: I0120 06:47:35.378555 3624 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4585.0.0-n-7cf3a16d5e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:47:35.378786 kubelet[3624]: I0120 06:47:35.378691 3624 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:47:35.378786 kubelet[3624]: I0120 06:47:35.378700 3624 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:47:35.378786 kubelet[3624]: I0120 06:47:35.378786 3624 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:47:35.381083 kubelet[3624]: I0120 06:47:35.381020 3624 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:47:35.381083 kubelet[3624]: I0120 06:47:35.381052 3624 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:47:35.381083 kubelet[3624]: I0120 06:47:35.381072 3624 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:47:35.381172 kubelet[3624]: I0120 06:47:35.381093 3624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:47:35.384475 kubelet[3624]: W0120 06:47:35.383925 3624 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4585.0.0-n-7cf3a16d5e&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 20 06:47:35.384475 kubelet[3624]: E0120 06:47:35.383974 3624 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4585.0.0-n-7cf3a16d5e&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:47:35.384475 
kubelet[3624]: W0120 06:47:35.384266 3624 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 20 06:47:35.384475 kubelet[3624]: E0120 06:47:35.384296 3624 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:47:35.384686 kubelet[3624]: I0120 06:47:35.384677 3624 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:47:35.385006 kubelet[3624]: I0120 06:47:35.384998 3624 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:47:35.385545 kubelet[3624]: W0120 06:47:35.385536 3624 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 06:47:35.387314 kubelet[3624]: I0120 06:47:35.387302 3624 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:47:35.387410 kubelet[3624]: I0120 06:47:35.387401 3624 server.go:1287] "Started kubelet" Jan 20 06:47:35.390692 kubelet[3624]: I0120 06:47:35.390136 3624 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:47:35.390873 kubelet[3624]: I0120 06:47:35.390852 3624 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:47:35.393361 kubelet[3624]: I0120 06:47:35.392957 3624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:47:35.393361 kubelet[3624]: I0120 06:47:35.393140 3624 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:47:35.393361 kubelet[3624]: I0120 06:47:35.393266 3624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:47:35.394743 kubelet[3624]: E0120 06:47:35.393271 3624 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.22:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.22:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4585.0.0-n-7cf3a16d5e.188c5d96ad848ac3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4585.0.0-n-7cf3a16d5e,UID:ci-4585.0.0-n-7cf3a16d5e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4585.0.0-n-7cf3a16d5e,},FirstTimestamp:2026-01-20 06:47:35.387384515 +0000 UTC m=+0.160809103,LastTimestamp:2026-01-20 06:47:35.387384515 +0000 UTC m=+0.160809103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4585.0.0-n-7cf3a16d5e,}" Jan 20 06:47:35.394000 audit[3635]: NETFILTER_CFG table=mangle:45 family=2 entries=2 op=nft_register_chain pid=3635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.394000 audit[3635]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdae1685c0 a2=0 a3=0 items=0 ppid=3624 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.394000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 06:47:35.395000 audit[3636]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=3636 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.395000 audit[3636]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd530e3c0 a2=0 a3=0 items=0 ppid=3624 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 06:47:35.398448 kubelet[3624]: I0120 06:47:35.396959 3624 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:47:35.398448 kubelet[3624]: I0120 06:47:35.397765 3624 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:47:35.398448 kubelet[3624]: E0120 06:47:35.397933 3624 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" Jan 20 06:47:35.397000 audit[3638]: NETFILTER_CFG table=filter:47 family=2 entries=2 op=nft_register_chain pid=3638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.397000 audit[3638]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff9aa43650 a2=0 a3=0 items=0 ppid=3624 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.397000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:47:35.400050 kubelet[3624]: I0120 06:47:35.400039 3624 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:47:35.400196 kubelet[3624]: I0120 06:47:35.400184 3624 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:47:35.400268 kubelet[3624]: I0120 06:47:35.400257 3624 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:47:35.400303 kubelet[3624]: I0120 06:47:35.400296 3624 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:47:35.400595 kubelet[3624]: E0120 06:47:35.400579 3624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585.0.0-n-7cf3a16d5e?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="200ms" Jan 20 06:47:35.399000 audit[3640]: NETFILTER_CFG table=filter:48 family=2 entries=2 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.399000 audit[3640]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe9e154b50 a2=0 a3=0 items=0 ppid=3624 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.399000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:47:35.401837 kubelet[3624]: E0120 06:47:35.401825 3624 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:47:35.401968 kubelet[3624]: I0120 06:47:35.401961 3624 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:47:35.404924 kubelet[3624]: W0120 06:47:35.404889 3624 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 20 06:47:35.406258 kubelet[3624]: E0120 06:47:35.406240 3624 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:47:35.411000 audit[3645]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=3645 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.411000 audit[3645]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe0971f240 a2=0 a3=0 items=0 ppid=3624 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.411000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 06:47:35.413376 kubelet[3624]: I0120 06:47:35.413354 3624 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 20 06:47:35.412000 audit[3646]: NETFILTER_CFG table=mangle:50 family=10 entries=2 op=nft_register_chain pid=3646 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:35.412000 audit[3646]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb936ba00 a2=0 a3=0 items=0 ppid=3624 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 06:47:35.413000 audit[3647]: NETFILTER_CFG table=mangle:51 family=2 entries=1 op=nft_register_chain pid=3647 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.413000 audit[3647]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb359de10 a2=0 a3=0 items=0 ppid=3624 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.413000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 06:47:35.415000 audit[3648]: NETFILTER_CFG table=nat:52 family=2 entries=1 op=nft_register_chain pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.415000 audit[3648]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc4c06f60 a2=0 a3=0 items=0 ppid=3624 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 06:47:35.417088 kubelet[3624]: I0120 06:47:35.416978 3624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:47:35.417088 kubelet[3624]: I0120 06:47:35.416992 3624 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:47:35.417088 kubelet[3624]: I0120 06:47:35.417003 3624 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 06:47:35.417088 kubelet[3624]: I0120 06:47:35.417008 3624 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:47:35.417088 kubelet[3624]: E0120 06:47:35.417036 3624 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:47:35.416000 audit[3649]: NETFILTER_CFG table=mangle:53 family=10 entries=1 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:35.416000 audit[3650]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:35.416000 audit[3650]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7c280b10 a2=0 a3=0 items=0 ppid=3624 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.416000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 06:47:35.416000 audit[3649]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2da00740 a2=0 a3=0 items=0 ppid=3624 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 06:47:35.418000 audit[3651]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_chain pid=3651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:35.418000 audit[3651]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdea960dc0 a2=0 a3=0 items=0 ppid=3624 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 06:47:35.419864 kubelet[3624]: W0120 06:47:35.419570 3624 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.22:6443: connect: connection refused Jan 20 06:47:35.419864 kubelet[3624]: E0120 06:47:35.419614 3624 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.22:6443: connect: connection refused" logger="UnhandledError" Jan 20 06:47:35.419864 kubelet[3624]: I0120 06:47:35.419675 3624 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:47:35.419864 kubelet[3624]: I0120 06:47:35.419680 3624 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:47:35.419864 kubelet[3624]: I0120 06:47:35.419692 3624 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:47:35.419000 audit[3652]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=3652 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:35.419000 audit[3652]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff70ec1f90 a2=0 a3=0 items=0 ppid=3624 pid=3652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 06:47:35.425688 kubelet[3624]: I0120 06:47:35.425675 3624 policy_none.go:49] "None policy: Start" Jan 20 06:47:35.425688 kubelet[3624]: I0120 06:47:35.425689 3624 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:47:35.425764 kubelet[3624]: I0120 06:47:35.425698 3624 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:47:35.432577 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 06:47:35.443549 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 06:47:35.445721 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 06:47:35.455642 kubelet[3624]: I0120 06:47:35.455630 3624 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:47:35.455814 kubelet[3624]: I0120 06:47:35.455808 3624 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:47:35.456289 kubelet[3624]: I0120 06:47:35.456048 3624 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:47:35.456965 kubelet[3624]: I0120 06:47:35.456954 3624 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:47:35.457792 kubelet[3624]: E0120 06:47:35.457774 3624 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 06:47:35.457862 kubelet[3624]: E0120 06:47:35.457805 3624 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4585.0.0-n-7cf3a16d5e\" not found" Jan 20 06:47:35.524845 systemd[1]: Created slice kubepods-burstable-pod671488e151612fc67d9b2f8bd9fea161.slice - libcontainer container kubepods-burstable-pod671488e151612fc67d9b2f8bd9fea161.slice. Jan 20 06:47:35.531237 kubelet[3624]: E0120 06:47:35.531188 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.532239 systemd[1]: Created slice kubepods-burstable-pode3a58c8faecf7bbdf2c5d21ac29d12b9.slice - libcontainer container kubepods-burstable-pode3a58c8faecf7bbdf2c5d21ac29d12b9.slice. Jan 20 06:47:35.547070 kubelet[3624]: E0120 06:47:35.546944 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.548938 systemd[1]: Created slice kubepods-burstable-pod0cd9a718b651d8b90baf499067435603.slice - libcontainer container kubepods-burstable-pod0cd9a718b651d8b90baf499067435603.slice. 
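Editor's note: the slices created just above follow the kubelet's systemd cgroup-driver naming: a QoS parent slice (kubepods-burstable.slice, kubepods-besteffort.slice) plus a per-pod slice embedding the pod UID. A small sketch that reconstructs the names seen in the log; the naming is inferred from those entries, and build_pod_slice is a hypothetical helper, not a kubelet API.

```python
# Rebuild the per-pod slice names systemd is asked to create above, assuming
# the kubelet's systemd cgroup-driver naming (QoS parent slice + "pod<uid>").
def build_pod_slice(qos_class: str, pod_uid: str) -> str:
    # Burstable/besteffort pods get an intermediate QoS slice under kubepods.slice,
    # matching the kubepods-burstable.slice / kubepods-besteffort.slice units above.
    prefix = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
    # Dashes in a pod UID become underscores in systemd unit names; the static-pod
    # UID below (taken from the log) has none, so it passes through unchanged.
    return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

print(build_pod_slice("burstable", "671488e151612fc67d9b2f8bd9fea161"))
# -> kubepods-burstable-pod671488e151612fc67d9b2f8bd9fea161.slice
```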
Jan 20 06:47:35.550298 kubelet[3624]: E0120 06:47:35.550283 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.557284 kubelet[3624]: I0120 06:47:35.557271 3624 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.557512 kubelet[3624]: E0120 06:47:35.557497 3624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.601019 kubelet[3624]: E0120 06:47:35.600998 3624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585.0.0-n-7cf3a16d5e?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="400ms" Jan 20 06:47:35.701462 kubelet[3624]: I0120 06:47:35.701440 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671488e151612fc67d9b2f8bd9fea161-ca-certs\") pod \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"671488e151612fc67d9b2f8bd9fea161\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701462 kubelet[3624]: I0120 06:47:35.701466 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/671488e151612fc67d9b2f8bd9fea161-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"671488e151612fc67d9b2f8bd9fea161\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701553 kubelet[3624]: I0120 06:47:35.701481 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-ca-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701553 kubelet[3624]: I0120 06:47:35.701494 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701553 kubelet[3624]: I0120 06:47:35.701507 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671488e151612fc67d9b2f8bd9fea161-k8s-certs\") pod \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"671488e151612fc67d9b2f8bd9fea161\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701553 kubelet[3624]: I0120 06:47:35.701521 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " 
pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701553 kubelet[3624]: I0120 06:47:35.701533 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-k8s-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701664 kubelet[3624]: I0120 06:47:35.701549 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-kubeconfig\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.701664 kubelet[3624]: I0120 06:47:35.701565 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cd9a718b651d8b90baf499067435603-kubeconfig\") pod \"kube-scheduler-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"0cd9a718b651d8b90baf499067435603\") " pod="kube-system/kube-scheduler-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.759357 kubelet[3624]: I0120 06:47:35.759346 3624 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.759566 kubelet[3624]: E0120 06:47:35.759552 3624 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.22:6443/api/v1/nodes\": dial tcp 10.200.8.22:6443: connect: connection refused" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:35.832611 containerd[2553]: time="2026-01-20T06:47:35.832544682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4585.0.0-n-7cf3a16d5e,Uid:671488e151612fc67d9b2f8bd9fea161,Namespace:kube-system,Attempt:0,}" Jan 20 06:47:35.847415 containerd[2553]: time="2026-01-20T06:47:35.847391383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e,Uid:e3a58c8faecf7bbdf2c5d21ac29d12b9,Namespace:kube-system,Attempt:0,}" Jan 20 06:47:35.851253 containerd[2553]: time="2026-01-20T06:47:35.851219353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4585.0.0-n-7cf3a16d5e,Uid:0cd9a718b651d8b90baf499067435603,Namespace:kube-system,Attempt:0,}" Jan 20 06:47:35.910076 containerd[2553]: time="2026-01-20T06:47:35.910051701Z" level=info msg="connecting to shim a838be95adc96d9ba082d409d94f9881e27f3adc39909c03f5d35aadb5b23821" address="unix:///run/containerd/s/89ee8234bc0f751e336d16e73dd5b7ac1d2dd550b3399ac6af3cbb97acb50928" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:47:35.926560 systemd[1]: Started cri-containerd-a838be95adc96d9ba082d409d94f9881e27f3adc39909c03f5d35aadb5b23821.scope - libcontainer container a838be95adc96d9ba082d409d94f9881e27f3adc39909c03f5d35aadb5b23821. 
Jan 20 06:47:35.931270 containerd[2553]: time="2026-01-20T06:47:35.930448077Z" level=info msg="connecting to shim 6414b6526ee45fd8a12c339ec79882be11a40a1a6ceee68c733ed6b1fe47d91d" address="unix:///run/containerd/s/03d5dd968d63437fd81351875ffcc12fefab1004178913a3f604047ee874dc00" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:47:35.947000 audit: BPF prog-id=107 op=LOAD Jan 20 06:47:35.949000 audit: BPF prog-id=108 op=LOAD Jan 20 06:47:35.949000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333862653935616463393664396261303832643430396439346639 Jan 20 06:47:35.949000 audit: BPF prog-id=108 op=UNLOAD Jan 20 06:47:35.949000 audit[3675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333862653935616463393664396261303832643430396439346639 Jan 20 06:47:35.949000 audit: BPF prog-id=109 op=LOAD Jan 20 06:47:35.949000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333862653935616463393664396261303832643430396439346639 Jan 20 06:47:35.949000 audit: BPF prog-id=110 op=LOAD Jan 20 06:47:35.949000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333862653935616463393664396261303832643430396439346639 Jan 20 06:47:35.949000 audit: BPF prog-id=110 op=UNLOAD Jan 20 06:47:35.949000 audit[3675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333862653935616463393664396261303832643430396439346639 Jan 20 06:47:35.949000 audit: BPF prog-id=109 op=UNLOAD Jan 20 06:47:35.949000 audit[3675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333862653935616463393664396261303832643430396439346639 Jan 20 06:47:35.949000 audit: BPF prog-id=111 op=LOAD Jan 20 06:47:35.949000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3663 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138333862653935616463393664396261303832643430396439346639 Jan 20 06:47:35.952808 containerd[2553]: time="2026-01-20T06:47:35.952779733Z" level=info msg="connecting to shim fc9b5f5a5a036e0b0d4d5893526e56deacd6901e8ead6789f5dd45869378fc92" address="unix:///run/containerd/s/4b8c61403bf6585e94897f593df04fd93d0dd5e9a4a88a2ffeb4057eb6de4495" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:47:35.964484 systemd[1]: Started cri-containerd-6414b6526ee45fd8a12c339ec79882be11a40a1a6ceee68c733ed6b1fe47d91d.scope - libcontainer container 6414b6526ee45fd8a12c339ec79882be11a40a1a6ceee68c733ed6b1fe47d91d. Jan 20 06:47:35.980352 systemd[1]: Started cri-containerd-fc9b5f5a5a036e0b0d4d5893526e56deacd6901e8ead6789f5dd45869378fc92.scope - libcontainer container fc9b5f5a5a036e0b0d4d5893526e56deacd6901e8ead6789f5dd45869378fc92. 
Jan 20 06:47:35.982000 audit: BPF prog-id=112 op=LOAD Jan 20 06:47:35.983000 audit: BPF prog-id=113 op=LOAD Jan 20 06:47:35.983000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3695 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313462363532366565343566643861313263333339656337393838 Jan 20 06:47:35.983000 audit: BPF prog-id=113 op=UNLOAD Jan 20 06:47:35.983000 audit[3714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.983000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313462363532366565343566643861313263333339656337393838 Jan 20 06:47:35.985000 audit: BPF prog-id=114 op=LOAD Jan 20 06:47:35.985000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3695 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313462363532366565343566643861313263333339656337393838 Jan 20 06:47:35.985000 audit: BPF prog-id=115 op=LOAD Jan 20 06:47:35.985000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3695 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313462363532366565343566643861313263333339656337393838 Jan 20 06:47:35.985000 audit: BPF prog-id=115 op=UNLOAD Jan 20 06:47:35.985000 audit[3714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313462363532366565343566643861313263333339656337393838 Jan 20 06:47:35.985000 audit: BPF prog-id=114 op=UNLOAD Jan 20 06:47:35.985000 audit[3714]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313462363532366565343566643861313263333339656337393838 Jan 20 06:47:35.985000 audit: BPF prog-id=116 op=LOAD Jan 20 06:47:35.985000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3695 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:35.985000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634313462363532366565343566643861313263333339656337393838 Jan 20 06:47:35.996375 containerd[2553]: time="2026-01-20T06:47:35.996349787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4585.0.0-n-7cf3a16d5e,Uid:671488e151612fc67d9b2f8bd9fea161,Namespace:kube-system,Attempt:0,} returns sandbox id \"a838be95adc96d9ba082d409d94f9881e27f3adc39909c03f5d35aadb5b23821\"" Jan 20 06:47:36.001228 containerd[2553]: time="2026-01-20T06:47:35.999900613Z" level=info msg="CreateContainer within sandbox \"a838be95adc96d9ba082d409d94f9881e27f3adc39909c03f5d35aadb5b23821\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 06:47:36.002156 kubelet[3624]: E0120 06:47:36.002130 3624 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4585.0.0-n-7cf3a16d5e?timeout=10s\": dial tcp 10.200.8.22:6443: connect: connection refused" interval="800ms" Jan 20 06:47:36.001000 audit: BPF prog-id=117 op=LOAD Jan 20 06:47:36.002000 audit: BPF prog-id=118 op=LOAD Jan 20 06:47:36.002000 audit[3745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3730 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663396235663561356130333665306230643464353839333532366535 Jan 20 06:47:36.002000 audit: BPF prog-id=118 op=UNLOAD Jan 20 06:47:36.002000 audit[3745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.002000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663396235663561356130333665306230643464353839333532366535 Jan 20 06:47:36.002000 audit: BPF prog-id=119 op=LOAD Jan 20 06:47:36.002000 audit[3745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3730 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663396235663561356130333665306230643464353839333532366535 Jan 20 06:47:36.002000 audit: BPF prog-id=120 op=LOAD Jan 20 06:47:36.002000 audit[3745]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3730 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663396235663561356130333665306230643464353839333532366535 Jan 20 06:47:36.002000 audit: BPF prog-id=120 op=UNLOAD Jan 20 06:47:36.002000 audit[3745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663396235663561356130333665306230643464353839333532366535 Jan 20 06:47:36.002000 audit: BPF prog-id=119 op=UNLOAD Jan 20 06:47:36.002000 audit[3745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663396235663561356130333665306230643464353839333532366535 Jan 20 06:47:36.002000 audit: BPF prog-id=121 op=LOAD Jan 20 06:47:36.002000 audit[3745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3730 pid=3745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.002000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663396235663561356130333665306230643464353839333532366535 Jan 20 06:47:36.016931 containerd[2553]: time="2026-01-20T06:47:36.016913787Z" level=info msg="Container 1aa8f6557440085e885bb305db5d838550bbd9f596b5f727bd36a105ae78d1b7: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:47:36.035668 containerd[2553]: time="2026-01-20T06:47:36.035601124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e,Uid:e3a58c8faecf7bbdf2c5d21ac29d12b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6414b6526ee45fd8a12c339ec79882be11a40a1a6ceee68c733ed6b1fe47d91d\"" Jan 20 06:47:36.036331 containerd[2553]: time="2026-01-20T06:47:36.036314153Z" level=info msg="CreateContainer within sandbox \"a838be95adc96d9ba082d409d94f9881e27f3adc39909c03f5d35aadb5b23821\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1aa8f6557440085e885bb305db5d838550bbd9f596b5f727bd36a105ae78d1b7\"" Jan 20 06:47:36.037823 containerd[2553]: time="2026-01-20T06:47:36.037803906Z" level=info msg="StartContainer for \"1aa8f6557440085e885bb305db5d838550bbd9f596b5f727bd36a105ae78d1b7\"" Jan 20 06:47:36.038840 containerd[2553]: time="2026-01-20T06:47:36.038821634Z" level=info msg="CreateContainer within sandbox \"6414b6526ee45fd8a12c339ec79882be11a40a1a6ceee68c733ed6b1fe47d91d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 06:47:36.040351 containerd[2553]: time="2026-01-20T06:47:36.040263762Z" level=info msg="connecting to shim 1aa8f6557440085e885bb305db5d838550bbd9f596b5f727bd36a105ae78d1b7" address="unix:///run/containerd/s/89ee8234bc0f751e336d16e73dd5b7ac1d2dd550b3399ac6af3cbb97acb50928" protocol=ttrpc version=3 Jan 20 06:47:36.041274 containerd[2553]: time="2026-01-20T06:47:36.041252652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4585.0.0-n-7cf3a16d5e,Uid:0cd9a718b651d8b90baf499067435603,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc9b5f5a5a036e0b0d4d5893526e56deacd6901e8ead6789f5dd45869378fc92\"" Jan 20 06:47:36.046264 containerd[2553]: time="2026-01-20T06:47:36.046245482Z" level=info msg="CreateContainer within sandbox \"fc9b5f5a5a036e0b0d4d5893526e56deacd6901e8ead6789f5dd45869378fc92\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 06:47:36.057351 systemd[1]: Started cri-containerd-1aa8f6557440085e885bb305db5d838550bbd9f596b5f727bd36a105ae78d1b7.scope - libcontainer container 1aa8f6557440085e885bb305db5d838550bbd9f596b5f727bd36a105ae78d1b7. 
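Editor's note: the lease controller's retry interval doubles across these entries: 200ms at 06:47:35.400, 400ms at 06:47:35.601, and 800ms at 06:47:36.002. A hedged sketch of that doubling pattern; only the starting interval and the doubling are taken from the log, while the cap and number of steps are assumptions for illustration.

```python
# The "Failed to ensure lease exists, will retry" intervals above go
# 200ms -> 400ms -> 800ms, i.e. doubling backoff. The 7s cap is an
# assumption for illustration only; the log shows just the first three steps.
def backoff_intervals(start_s: float = 0.2, factor: float = 2.0,
                      cap_s: float = 7.0, steps: int = 6):
    interval = start_s
    for _ in range(steps):
        yield interval
        interval = min(interval * factor, cap_s)

print([f"{i:.1f}s" for i in backoff_intervals()])
# -> ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
```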
Jan 20 06:47:36.066000 audit: BPF prog-id=122 op=LOAD Jan 20 06:47:36.068304 containerd[2553]: time="2026-01-20T06:47:36.068285877Z" level=info msg="Container a4a96a9ffd954569d1862cf84d3d6d756bd6f9196ec77e72cb56dca5cc0e5787: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:47:36.067000 audit: BPF prog-id=123 op=LOAD Jan 20 06:47:36.067000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613866363535373434303038356538383562623330356462356438 Jan 20 06:47:36.067000 audit: BPF prog-id=123 op=UNLOAD Jan 20 06:47:36.067000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613866363535373434303038356538383562623330356462356438 Jan 20 06:47:36.067000 audit: BPF prog-id=124 op=LOAD Jan 20 06:47:36.067000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613866363535373434303038356538383562623330356462356438 Jan 20 06:47:36.067000 audit: BPF prog-id=125 op=LOAD Jan 20 06:47:36.067000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613866363535373434303038356538383562623330356462356438 Jan 20 06:47:36.067000 audit: BPF prog-id=125 op=UNLOAD Jan 20 06:47:36.067000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.067000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613866363535373434303038356538383562623330356462356438 Jan 20 06:47:36.067000 audit: BPF prog-id=124 op=UNLOAD Jan 20 06:47:36.067000 audit[3790]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613866363535373434303038356538383562623330356462356438 Jan 20 06:47:36.067000 audit: BPF prog-id=126 op=LOAD Jan 20 06:47:36.067000 audit[3790]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3663 pid=3790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161613866363535373434303038356538383562623330356462356438 Jan 20 06:47:36.074400 containerd[2553]: time="2026-01-20T06:47:36.074380889Z" level=info msg="Container d5d753f0e962942bf6a54030991b2f2c9de09eaaba0c8fb0f5b47065a218d5a1: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:47:36.088068 containerd[2553]: time="2026-01-20T06:47:36.087184892Z" level=info msg="CreateContainer within sandbox \"6414b6526ee45fd8a12c339ec79882be11a40a1a6ceee68c733ed6b1fe47d91d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4a96a9ffd954569d1862cf84d3d6d756bd6f9196ec77e72cb56dca5cc0e5787\"" Jan 20 06:47:36.088068 containerd[2553]: time="2026-01-20T06:47:36.087931453Z" level=info msg="StartContainer for \"a4a96a9ffd954569d1862cf84d3d6d756bd6f9196ec77e72cb56dca5cc0e5787\"" Jan 20 06:47:36.089453 containerd[2553]: time="2026-01-20T06:47:36.089062171Z" level=info msg="connecting to shim a4a96a9ffd954569d1862cf84d3d6d756bd6f9196ec77e72cb56dca5cc0e5787" address="unix:///run/containerd/s/03d5dd968d63437fd81351875ffcc12fefab1004178913a3f604047ee874dc00" protocol=ttrpc version=3 Jan 20 06:47:36.097518 containerd[2553]: time="2026-01-20T06:47:36.097489882Z" level=info msg="CreateContainer within sandbox \"fc9b5f5a5a036e0b0d4d5893526e56deacd6901e8ead6789f5dd45869378fc92\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d5d753f0e962942bf6a54030991b2f2c9de09eaaba0c8fb0f5b47065a218d5a1\"" Jan 20 06:47:36.097848 containerd[2553]: time="2026-01-20T06:47:36.097822452Z" level=info msg="StartContainer for \"d5d753f0e962942bf6a54030991b2f2c9de09eaaba0c8fb0f5b47065a218d5a1\"" Jan 20 06:47:36.101263 containerd[2553]: time="2026-01-20T06:47:36.101235649Z" level=info msg="connecting to shim d5d753f0e962942bf6a54030991b2f2c9de09eaaba0c8fb0f5b47065a218d5a1" address="unix:///run/containerd/s/4b8c61403bf6585e94897f593df04fd93d0dd5e9a4a88a2ffeb4057eb6de4495" protocol=ttrpc version=3 Jan 20 06:47:36.106341 containerd[2553]: 
time="2026-01-20T06:47:36.106316935Z" level=info msg="StartContainer for \"1aa8f6557440085e885bb305db5d838550bbd9f596b5f727bd36a105ae78d1b7\" returns successfully" Jan 20 06:47:36.109378 systemd[1]: Started cri-containerd-a4a96a9ffd954569d1862cf84d3d6d756bd6f9196ec77e72cb56dca5cc0e5787.scope - libcontainer container a4a96a9ffd954569d1862cf84d3d6d756bd6f9196ec77e72cb56dca5cc0e5787. Jan 20 06:47:36.128671 systemd[1]: Started cri-containerd-d5d753f0e962942bf6a54030991b2f2c9de09eaaba0c8fb0f5b47065a218d5a1.scope - libcontainer container d5d753f0e962942bf6a54030991b2f2c9de09eaaba0c8fb0f5b47065a218d5a1. Jan 20 06:47:36.137000 audit: BPF prog-id=127 op=LOAD Jan 20 06:47:36.137000 audit: BPF prog-id=128 op=LOAD Jan 20 06:47:36.137000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3695 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613936613966666439353435363964313836326366383464336436 Jan 20 06:47:36.137000 audit: BPF prog-id=128 op=UNLOAD Jan 20 06:47:36.137000 audit[3812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613936613966666439353435363964313836326366383464336436 Jan 20 06:47:36.137000 audit: BPF prog-id=129 op=LOAD Jan 20 06:47:36.137000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3695 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613936613966666439353435363964313836326366383464336436 Jan 20 06:47:36.137000 audit: BPF prog-id=130 op=LOAD Jan 20 06:47:36.137000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3695 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613936613966666439353435363964313836326366383464336436 Jan 20 06:47:36.137000 audit: BPF prog-id=130 op=UNLOAD Jan 20 06:47:36.137000 audit[3812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 
items=0 ppid=3695 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613936613966666439353435363964313836326366383464336436 Jan 20 06:47:36.137000 audit: BPF prog-id=129 op=UNLOAD Jan 20 06:47:36.137000 audit[3812]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3695 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613936613966666439353435363964313836326366383464336436 Jan 20 06:47:36.137000 audit: BPF prog-id=131 op=LOAD Jan 20 06:47:36.137000 audit[3812]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3695 pid=3812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134613936613966666439353435363964313836326366383464336436 Jan 20 06:47:36.154000 audit: BPF prog-id=132 op=LOAD Jan 20 06:47:36.154000 audit: BPF prog-id=133 op=LOAD Jan 20 06:47:36.154000 audit[3831]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3730 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435643735336630653936323934326266366135343033303939316232 Jan 20 06:47:36.154000 audit: BPF prog-id=133 op=UNLOAD Jan 20 06:47:36.154000 audit[3831]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435643735336630653936323934326266366135343033303939316232 Jan 20 06:47:36.154000 audit: BPF prog-id=134 op=LOAD Jan 20 06:47:36.154000 audit[3831]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3730 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435643735336630653936323934326266366135343033303939316232 Jan 20 06:47:36.154000 audit: BPF prog-id=135 op=LOAD Jan 20 06:47:36.154000 audit[3831]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3730 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435643735336630653936323934326266366135343033303939316232 Jan 20 06:47:36.154000 audit: BPF prog-id=135 op=UNLOAD Jan 20 06:47:36.154000 audit[3831]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435643735336630653936323934326266366135343033303939316232 Jan 20 06:47:36.154000 audit: BPF prog-id=134 op=UNLOAD Jan 20 06:47:36.154000 audit[3831]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3730 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435643735336630653936323934326266366135343033303939316232 Jan 20 06:47:36.154000 audit: BPF prog-id=136 op=LOAD Jan 20 06:47:36.154000 audit[3831]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3730 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:36.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435643735336630653936323934326266366135343033303939316232 Jan 20 06:47:36.161506 kubelet[3624]: I0120 06:47:36.161492 3624 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:36.196567 containerd[2553]: time="2026-01-20T06:47:36.195361484Z" level=info msg="StartContainer for \"a4a96a9ffd954569d1862cf84d3d6d756bd6f9196ec77e72cb56dca5cc0e5787\" returns successfully" Jan 20 06:47:36.212331 containerd[2553]: 
time="2026-01-20T06:47:36.212311766Z" level=info msg="StartContainer for \"d5d753f0e962942bf6a54030991b2f2c9de09eaaba0c8fb0f5b47065a218d5a1\" returns successfully" Jan 20 06:47:36.426613 kubelet[3624]: E0120 06:47:36.426592 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:36.428747 kubelet[3624]: E0120 06:47:36.428728 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:36.432166 kubelet[3624]: E0120 06:47:36.432147 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:37.433539 kubelet[3624]: E0120 06:47:37.433303 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:37.434144 kubelet[3624]: E0120 06:47:37.434125 3624 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:37.716736 kubelet[3624]: E0120 06:47:37.716511 3624 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4585.0.0-n-7cf3a16d5e\" not found" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:37.916233 kubelet[3624]: I0120 06:47:37.915999 3624 kubelet_node_status.go:78] "Successfully registered node" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:37.916233 kubelet[3624]: E0120 06:47:37.916027 3624 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4585.0.0-n-7cf3a16d5e\": node \"ci-4585.0.0-n-7cf3a16d5e\" not found" Jan 20 06:47:37.998805 kubelet[3624]: I0120 06:47:37.998745 3624 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:38.074286 kubelet[3624]: E0120 06:47:38.074249 3624 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4585.0.0-n-7cf3a16d5e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:38.074286 kubelet[3624]: I0120 06:47:38.074272 3624 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:38.075753 kubelet[3624]: E0120 06:47:38.075735 3624 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:38.075753 kubelet[3624]: I0120 06:47:38.075753 3624 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:38.077854 kubelet[3624]: E0120 06:47:38.077833 3624 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:38.385870 kubelet[3624]: I0120 06:47:38.385854 3624 apiserver.go:52] 
"Watching apiserver" Jan 20 06:47:38.400543 kubelet[3624]: I0120 06:47:38.400529 3624 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:47:39.661857 systemd[1]: Reload requested from client PID 3888 ('systemctl') (unit session-10.scope)... Jan 20 06:47:39.661869 systemd[1]: Reloading... Jan 20 06:47:39.734232 zram_generator::config[3934]: No configuration found. Jan 20 06:47:39.912626 systemd[1]: Reloading finished in 250 ms. Jan 20 06:47:39.931291 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:47:39.944907 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 06:47:39.945121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:47:39.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:39.945844 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 20 06:47:39.945884 kernel: audit: type=1131 audit(1768891659.943:424): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:39.946330 systemd[1]: kubelet.service: Consumed 415ms CPU time, 131.5M memory peak. Jan 20 06:47:39.949462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 06:47:39.951603 kernel: audit: type=1334 audit(1768891659.948:425): prog-id=137 op=LOAD Jan 20 06:47:39.948000 audit: BPF prog-id=137 op=LOAD Jan 20 06:47:39.948000 audit: BPF prog-id=88 op=UNLOAD Jan 20 06:47:39.953461 kernel: audit: type=1334 audit(1768891659.948:426): prog-id=88 op=UNLOAD Jan 20 06:47:39.948000 audit: BPF prog-id=138 op=LOAD Jan 20 06:47:39.948000 audit: BPF prog-id=139 op=LOAD Jan 20 06:47:39.956511 kernel: audit: type=1334 audit(1768891659.948:427): prog-id=138 op=LOAD Jan 20 06:47:39.956557 kernel: audit: type=1334 audit(1768891659.948:428): prog-id=139 op=LOAD Jan 20 06:47:39.956634 kernel: audit: type=1334 audit(1768891659.948:429): prog-id=89 op=UNLOAD Jan 20 06:47:39.948000 audit: BPF prog-id=89 op=UNLOAD Jan 20 06:47:39.948000 audit: BPF prog-id=90 op=UNLOAD Jan 20 06:47:39.958344 kernel: audit: type=1334 audit(1768891659.948:430): prog-id=90 op=UNLOAD Jan 20 06:47:39.958384 kernel: audit: type=1334 audit(1768891659.949:431): prog-id=140 op=LOAD Jan 20 06:47:39.949000 audit: BPF prog-id=140 op=LOAD Jan 20 06:47:39.949000 audit: BPF prog-id=104 op=UNLOAD Jan 20 06:47:39.959770 kernel: audit: type=1334 audit(1768891659.949:432): prog-id=104 op=UNLOAD Jan 20 06:47:39.949000 audit: BPF prog-id=141 op=LOAD Jan 20 06:47:39.960579 kernel: audit: type=1334 audit(1768891659.949:433): prog-id=141 op=LOAD Jan 20 06:47:39.949000 audit: BPF prog-id=142 op=LOAD Jan 20 06:47:39.949000 audit: BPF prog-id=105 op=UNLOAD Jan 20 06:47:39.949000 audit: BPF prog-id=106 op=UNLOAD Jan 20 06:47:39.949000 audit: BPF prog-id=143 op=LOAD Jan 20 06:47:39.949000 audit: BPF prog-id=87 op=UNLOAD Jan 20 06:47:39.953000 audit: BPF prog-id=144 op=LOAD Jan 20 06:47:39.953000 audit: BPF prog-id=145 op=LOAD Jan 20 06:47:39.953000 audit: BPF prog-id=95 op=UNLOAD Jan 20 06:47:39.953000 audit: BPF prog-id=96 op=UNLOAD Jan 20 06:47:39.961000 audit: BPF prog-id=146 op=LOAD Jan 20 06:47:39.961000 audit: BPF prog-id=92 op=UNLOAD Jan 20 06:47:39.961000 audit: BPF prog-id=147 op=LOAD Jan 20 06:47:39.961000 
audit: BPF prog-id=148 op=LOAD Jan 20 06:47:39.961000 audit: BPF prog-id=93 op=UNLOAD Jan 20 06:47:39.961000 audit: BPF prog-id=94 op=UNLOAD Jan 20 06:47:39.962000 audit: BPF prog-id=149 op=LOAD Jan 20 06:47:39.962000 audit: BPF prog-id=97 op=UNLOAD Jan 20 06:47:39.964000 audit: BPF prog-id=150 op=LOAD Jan 20 06:47:39.964000 audit: BPF prog-id=101 op=UNLOAD Jan 20 06:47:39.964000 audit: BPF prog-id=151 op=LOAD Jan 20 06:47:39.964000 audit: BPF prog-id=152 op=LOAD Jan 20 06:47:39.964000 audit: BPF prog-id=102 op=UNLOAD Jan 20 06:47:39.964000 audit: BPF prog-id=103 op=UNLOAD Jan 20 06:47:39.965000 audit: BPF prog-id=153 op=LOAD Jan 20 06:47:39.965000 audit: BPF prog-id=98 op=UNLOAD Jan 20 06:47:39.965000 audit: BPF prog-id=154 op=LOAD Jan 20 06:47:39.965000 audit: BPF prog-id=155 op=LOAD Jan 20 06:47:39.965000 audit: BPF prog-id=99 op=UNLOAD Jan 20 06:47:39.965000 audit: BPF prog-id=100 op=UNLOAD Jan 20 06:47:39.965000 audit: BPF prog-id=156 op=LOAD Jan 20 06:47:39.965000 audit: BPF prog-id=91 op=UNLOAD Jan 20 06:47:40.459072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 06:47:40.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:40.467440 (kubelet)[4005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 06:47:40.504318 kubelet[4005]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:47:40.504318 kubelet[4005]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 06:47:40.504318 kubelet[4005]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 06:47:40.504654 kubelet[4005]: I0120 06:47:40.504631 4005 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 06:47:40.509765 kubelet[4005]: I0120 06:47:40.509746 4005 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 06:47:40.509926 kubelet[4005]: I0120 06:47:40.509820 4005 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 06:47:40.510438 kubelet[4005]: I0120 06:47:40.510185 4005 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 06:47:40.513059 kubelet[4005]: I0120 06:47:40.513034 4005 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 06:47:40.515680 kubelet[4005]: I0120 06:47:40.515533 4005 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 06:47:40.519256 kubelet[4005]: I0120 06:47:40.519237 4005 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 06:47:40.522065 kubelet[4005]: I0120 06:47:40.522041 4005 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 06:47:40.522288 kubelet[4005]: I0120 06:47:40.522262 4005 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 06:47:40.522415 kubelet[4005]: I0120 06:47:40.522286 4005 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4585.0.0-n-7cf3a16d5e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 06:47:40.522501 kubelet[4005]: I0120 06:47:40.522421 4005 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 06:47:40.522501 kubelet[4005]: I0120 06:47:40.522429 4005 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 06:47:40.522501 kubelet[4005]: I0120 06:47:40.522467 4005 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:47:40.522756 kubelet[4005]: I0120 06:47:40.522572 4005 kubelet.go:446] "Attempting to sync node with API server" Jan 20 06:47:40.522756 kubelet[4005]: I0120 06:47:40.522592 4005 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 06:47:40.522756 kubelet[4005]: I0120 06:47:40.522610 4005 kubelet.go:352] "Adding apiserver pod source" Jan 20 06:47:40.522935 kubelet[4005]: I0120 06:47:40.522883 4005 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 06:47:40.524138 kubelet[4005]: I0120 06:47:40.524113 4005 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 06:47:40.524441 kubelet[4005]: I0120 06:47:40.524429 4005 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 06:47:40.524759 kubelet[4005]: I0120 06:47:40.524748 4005 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 06:47:40.524791 kubelet[4005]: I0120 06:47:40.524772 4005 server.go:1287] "Started kubelet" Jan 20 06:47:40.529236 kubelet[4005]: I0120 06:47:40.528604 4005 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 06:47:40.535165 kubelet[4005]: I0120 06:47:40.534912 4005 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 20 06:47:40.536832 kubelet[4005]: I0120 06:47:40.536816 4005 server.go:479] "Adding debug handlers to kubelet server" Jan 20 06:47:40.537483 kubelet[4005]: I0120 06:47:40.537474 4005 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 06:47:40.537676 kubelet[4005]: E0120 06:47:40.537667 4005 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4585.0.0-n-7cf3a16d5e\" not found" Jan 20 06:47:40.539513 kubelet[4005]: I0120 06:47:40.535186 4005 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 06:47:40.540030 kubelet[4005]: I0120 06:47:40.539991 4005 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 06:47:40.540155 kubelet[4005]: I0120 06:47:40.540144 4005 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 06:47:40.543400 kubelet[4005]: I0120 06:47:40.543389 4005 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 06:47:40.543545 kubelet[4005]: I0120 06:47:40.543540 4005 reconciler.go:26] "Reconciler: start to sync state" Jan 20 06:47:40.545196 kubelet[4005]: I0120 06:47:40.545177 4005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 06:47:40.546259 kubelet[4005]: I0120 06:47:40.546204 4005 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 06:47:40.546352 kubelet[4005]: I0120 06:47:40.546345 4005 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 06:47:40.546664 kubelet[4005]: I0120 06:47:40.546655 4005 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 06:47:40.546708 kubelet[4005]: I0120 06:47:40.546703 4005 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 06:47:40.546774 kubelet[4005]: E0120 06:47:40.546765 4005 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 06:47:40.551652 kubelet[4005]: E0120 06:47:40.551637 4005 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 06:47:40.555302 kubelet[4005]: I0120 06:47:40.555282 4005 factory.go:221] Registration of the containerd container factory successfully Jan 20 06:47:40.555302 kubelet[4005]: I0120 06:47:40.555301 4005 factory.go:221] Registration of the systemd container factory successfully Jan 20 06:47:40.555728 kubelet[4005]: I0120 06:47:40.555378 4005 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 06:47:40.587146 kubelet[4005]: I0120 06:47:40.587135 4005 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 06:47:40.587223 kubelet[4005]: I0120 06:47:40.587205 4005 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 06:47:40.587263 kubelet[4005]: I0120 06:47:40.587259 4005 state_mem.go:36] "Initialized new in-memory state store" Jan 20 06:47:40.587362 kubelet[4005]: I0120 06:47:40.587357 4005 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 06:47:40.587391 kubelet[4005]: I0120 06:47:40.587382 4005 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 06:47:40.587425 kubelet[4005]: I0120 06:47:40.587408 4005 policy_none.go:49] "None policy: Start" Jan 20 06:47:40.587450 kubelet[4005]: I0120 06:47:40.587447 4005 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 06:47:40.587471 kubelet[4005]: I0120 06:47:40.587468 4005 state_mem.go:35] "Initializing new in-memory state store" Jan 20 06:47:40.587545 kubelet[4005]: I0120 06:47:40.587542 4005 state_mem.go:75] "Updated machine memory state" Jan 20 06:47:40.589986 kubelet[4005]: I0120 06:47:40.589975 4005 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 06:47:40.590236 kubelet[4005]: I0120 06:47:40.590226 4005 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 06:47:40.590280 kubelet[4005]: I0120 06:47:40.590240 4005 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 06:47:40.590463 kubelet[4005]: I0120 06:47:40.590456 4005 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 06:47:40.592156 kubelet[4005]: E0120 06:47:40.592143 4005 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 06:47:40.648058 kubelet[4005]: I0120 06:47:40.648034 4005 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.648225 kubelet[4005]: I0120 06:47:40.648066 4005 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.648290 kubelet[4005]: I0120 06:47:40.648110 4005 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.654188 kubelet[4005]: W0120 06:47:40.654174 4005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:47:40.660484 kubelet[4005]: W0120 06:47:40.660365 4005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:47:40.660548 kubelet[4005]: W0120 06:47:40.660515 4005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:47:40.692730 kubelet[4005]: I0120 06:47:40.692357 4005 kubelet_node_status.go:75] "Attempting to register node" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.703250 kubelet[4005]: I0120 06:47:40.703236 4005 kubelet_node_status.go:124] "Node was previously registered" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.703360 kubelet[4005]: I0120 06:47:40.703353 4005 kubelet_node_status.go:78] "Successfully registered node" node="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.745079 kubelet[4005]: I0120 06:47:40.745029 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671488e151612fc67d9b2f8bd9fea161-k8s-certs\") pod \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"671488e151612fc67d9b2f8bd9fea161\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746080 kubelet[4005]: I0120 06:47:40.745877 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-ca-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746080 kubelet[4005]: I0120 06:47:40.745899 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-k8s-certs\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746080 kubelet[4005]: I0120 06:47:40.745916 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-kubeconfig\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746080 kubelet[4005]: I0120 06:47:40.745931 4005 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746080 kubelet[4005]: I0120 06:47:40.745948 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cd9a718b651d8b90baf499067435603-kubeconfig\") pod \"kube-scheduler-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"0cd9a718b651d8b90baf499067435603\") " pod="kube-system/kube-scheduler-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746195 kubelet[4005]: I0120 06:47:40.745962 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671488e151612fc67d9b2f8bd9fea161-ca-certs\") pod \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"671488e151612fc67d9b2f8bd9fea161\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746195 kubelet[4005]: I0120 06:47:40.745976 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/671488e151612fc67d9b2f8bd9fea161-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"671488e151612fc67d9b2f8bd9fea161\") " pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:40.746195 kubelet[4005]: I0120 06:47:40.745991 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e3a58c8faecf7bbdf2c5d21ac29d12b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e\" (UID: \"e3a58c8faecf7bbdf2c5d21ac29d12b9\") " pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:41.529239 kubelet[4005]: I0120 06:47:41.529107 4005 apiserver.go:52] "Watching apiserver" Jan 20 06:47:41.544358 kubelet[4005]: I0120 06:47:41.544341 4005 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 06:47:41.573879 kubelet[4005]: I0120 06:47:41.573639 4005 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:41.580767 kubelet[4005]: W0120 06:47:41.580734 4005 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 06:47:41.580835 kubelet[4005]: E0120 06:47:41.580785 4005 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4585.0.0-n-7cf3a16d5e\" already exists" pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:47:41.588086 kubelet[4005]: I0120 06:47:41.588046 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4585.0.0-n-7cf3a16d5e" podStartSLOduration=1.588036574 podStartE2EDuration="1.588036574s" podCreationTimestamp="2026-01-20 06:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:47:41.58800371 +0000 UTC m=+1.117824105" watchObservedRunningTime="2026-01-20 06:47:41.588036574 +0000 UTC 
m=+1.117856973" Jan 20 06:47:41.605224 kubelet[4005]: I0120 06:47:41.605114 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4585.0.0-n-7cf3a16d5e" podStartSLOduration=1.6051019100000001 podStartE2EDuration="1.60510191s" podCreationTimestamp="2026-01-20 06:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:47:41.595707418 +0000 UTC m=+1.125527815" watchObservedRunningTime="2026-01-20 06:47:41.60510191 +0000 UTC m=+1.134922356" Jan 20 06:47:41.605357 kubelet[4005]: I0120 06:47:41.605310 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4585.0.0-n-7cf3a16d5e" podStartSLOduration=1.605301024 podStartE2EDuration="1.605301024s" podCreationTimestamp="2026-01-20 06:47:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:47:41.603923108 +0000 UTC m=+1.133743506" watchObservedRunningTime="2026-01-20 06:47:41.605301024 +0000 UTC m=+1.135121435" Jan 20 06:47:44.789606 kubelet[4005]: I0120 06:47:44.789584 4005 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 06:47:44.789947 containerd[2553]: time="2026-01-20T06:47:44.789869202Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 06:47:44.790157 kubelet[4005]: I0120 06:47:44.790060 4005 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 06:47:45.777820 systemd[1]: Created slice kubepods-besteffort-podcaed3bd2_00bf_41b5_8db6_12c7d756cac5.slice - libcontainer container kubepods-besteffort-podcaed3bd2_00bf_41b5_8db6_12c7d756cac5.slice. 
Jan 20 06:47:45.778498 kubelet[4005]: I0120 06:47:45.778471 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llv5t\" (UniqueName: \"kubernetes.io/projected/caed3bd2-00bf-41b5-8db6-12c7d756cac5-kube-api-access-llv5t\") pod \"kube-proxy-d4vg7\" (UID: \"caed3bd2-00bf-41b5-8db6-12c7d756cac5\") " pod="kube-system/kube-proxy-d4vg7" Jan 20 06:47:45.778568 kubelet[4005]: I0120 06:47:45.778521 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/caed3bd2-00bf-41b5-8db6-12c7d756cac5-kube-proxy\") pod \"kube-proxy-d4vg7\" (UID: \"caed3bd2-00bf-41b5-8db6-12c7d756cac5\") " pod="kube-system/kube-proxy-d4vg7" Jan 20 06:47:45.778881 kubelet[4005]: I0120 06:47:45.778814 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caed3bd2-00bf-41b5-8db6-12c7d756cac5-xtables-lock\") pod \"kube-proxy-d4vg7\" (UID: \"caed3bd2-00bf-41b5-8db6-12c7d756cac5\") " pod="kube-system/kube-proxy-d4vg7" Jan 20 06:47:45.778881 kubelet[4005]: I0120 06:47:45.778836 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caed3bd2-00bf-41b5-8db6-12c7d756cac5-lib-modules\") pod \"kube-proxy-d4vg7\" (UID: \"caed3bd2-00bf-41b5-8db6-12c7d756cac5\") " pod="kube-system/kube-proxy-d4vg7" Jan 20 06:47:45.892069 systemd[1]: Created slice kubepods-besteffort-podfa77bde9_32af_4e39_8a7d_b5b1fc887f04.slice - libcontainer container kubepods-besteffort-podfa77bde9_32af_4e39_8a7d_b5b1fc887f04.slice. Jan 20 06:47:45.979782 kubelet[4005]: I0120 06:47:45.979750 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5zl\" (UniqueName: \"kubernetes.io/projected/fa77bde9-32af-4e39-8a7d-b5b1fc887f04-kube-api-access-lz5zl\") pod \"tigera-operator-7dcd859c48-2kkx9\" (UID: \"fa77bde9-32af-4e39-8a7d-b5b1fc887f04\") " pod="tigera-operator/tigera-operator-7dcd859c48-2kkx9" Jan 20 06:47:45.979971 kubelet[4005]: I0120 06:47:45.979786 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa77bde9-32af-4e39-8a7d-b5b1fc887f04-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2kkx9\" (UID: \"fa77bde9-32af-4e39-8a7d-b5b1fc887f04\") " pod="tigera-operator/tigera-operator-7dcd859c48-2kkx9" Jan 20 06:47:46.088611 containerd[2553]: time="2026-01-20T06:47:46.088543703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4vg7,Uid:caed3bd2-00bf-41b5-8db6-12c7d756cac5,Namespace:kube-system,Attempt:0,}" Jan 20 06:47:46.127836 containerd[2553]: time="2026-01-20T06:47:46.127785506Z" level=info msg="connecting to shim bc316ac17c9ad051ab249ce3a5d85e73f86a1430c02185b4ba2d9c50650296b2" address="unix:///run/containerd/s/797732e50291e7de9d2693a932fe1c9cbb38bce510350ce5f75648f6fc93d645" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:47:46.147383 systemd[1]: Started cri-containerd-bc316ac17c9ad051ab249ce3a5d85e73f86a1430c02185b4ba2d9c50650296b2.scope - libcontainer container bc316ac17c9ad051ab249ce3a5d85e73f86a1430c02185b4ba2d9c50650296b2. 
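The audit SYSCALL records that accompany the "BPF prog-id=… op=LOAD/UNLOAD" messages here and below all show arch=c000003e, i.e. x86_64: syscall=321 is bpf(2), whose exit value is the file descriptor of the loaded program, and syscall=3 is close(2) on that descriptor, consistent with runc loading BPF programs and then closing the returned fds, which is why each LOAD is paired with an UNLOAD. A small sketch for annotating such records; the names annotate and X86_64_SYSCALLS are illustrative, and the table deliberately covers only the syscall numbers that occur in this excerpt:

    import re

    # x86_64 (arch=c000003e) syscall numbers seen in this excerpt.
    # 46 (sendmsg) shows up in the NETFILTER_CFG records further down.
    X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

    def annotate(record: str) -> str:
        # Pull syscall= and exit= out of an audit SYSCALL record and name the call.
        num = int(re.search(r"\bsyscall=(\d+)", record).group(1))
        ret = re.search(r"\bexit=(-?\d+)", record).group(1)
        name = X86_64_SYSCALLS.get(num, f"syscall {num}")
        return f"{name}() -> {ret}"

    print(annotate("arch=c000003e syscall=321 success=yes exit=21 a0=5"))  # bpf() -> 21
    print(annotate("arch=c000003e syscall=3 success=yes exit=0 a0=15"))    # close() -> 0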
Jan 20 06:47:46.153000 audit: BPF prog-id=157 op=LOAD Jan 20 06:47:46.155656 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 06:47:46.155702 kernel: audit: type=1334 audit(1768891666.153:466): prog-id=157 op=LOAD Jan 20 06:47:46.154000 audit: BPF prog-id=158 op=LOAD Jan 20 06:47:46.158431 kernel: audit: type=1334 audit(1768891666.154:467): prog-id=158 op=LOAD Jan 20 06:47:46.162058 kernel: audit: type=1300 audit(1768891666.154:467): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit[4068]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.172202 kernel: audit: type=1327 audit(1768891666.154:467): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.172283 kernel: audit: type=1334 audit(1768891666.154:468): prog-id=158 op=UNLOAD Jan 20 06:47:46.154000 audit: BPF prog-id=158 op=UNLOAD Jan 20 06:47:46.154000 audit[4068]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.180927 kernel: audit: type=1300 audit(1768891666.154:468): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.181049 kernel: audit: type=1327 audit(1768891666.154:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.154000 audit: BPF prog-id=159 op=LOAD Jan 20 06:47:46.182779 kernel: audit: type=1334 audit(1768891666.154:469): prog-id=159 op=LOAD Jan 20 06:47:46.154000 audit[4068]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.186736 containerd[2553]: time="2026-01-20T06:47:46.186691071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4vg7,Uid:caed3bd2-00bf-41b5-8db6-12c7d756cac5,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc316ac17c9ad051ab249ce3a5d85e73f86a1430c02185b4ba2d9c50650296b2\"" Jan 20 06:47:46.187775 kernel: audit: type=1300 audit(1768891666.154:469): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.190825 containerd[2553]: time="2026-01-20T06:47:46.190754175Z" level=info msg="CreateContainer within sandbox \"bc316ac17c9ad051ab249ce3a5d85e73f86a1430c02185b4ba2d9c50650296b2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 06:47:46.193659 kernel: audit: type=1327 audit(1768891666.154:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.154000 audit: BPF prog-id=160 op=LOAD Jan 20 06:47:46.154000 audit[4068]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.154000 audit: BPF prog-id=160 op=UNLOAD Jan 20 06:47:46.154000 audit[4068]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.154000 audit: BPF prog-id=159 op=UNLOAD Jan 20 06:47:46.154000 audit[4068]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.154000 audit: BPF prog-id=161 op=LOAD Jan 20 06:47:46.154000 audit[4068]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4057 pid=4068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263333136616331376339616430353161623234396365336135643835 Jan 20 06:47:46.196667 containerd[2553]: time="2026-01-20T06:47:46.196550496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2kkx9,Uid:fa77bde9-32af-4e39-8a7d-b5b1fc887f04,Namespace:tigera-operator,Attempt:0,}" Jan 20 06:47:46.220985 containerd[2553]: time="2026-01-20T06:47:46.220967155Z" level=info msg="Container 3fc7a267601db5a8387aa91702b0eb6d7c3b6a3212a816542cf1350008c52a82: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:47:46.276924 containerd[2553]: time="2026-01-20T06:47:46.276889406Z" level=info msg="CreateContainer within sandbox \"bc316ac17c9ad051ab249ce3a5d85e73f86a1430c02185b4ba2d9c50650296b2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3fc7a267601db5a8387aa91702b0eb6d7c3b6a3212a816542cf1350008c52a82\"" Jan 20 06:47:46.277539 containerd[2553]: time="2026-01-20T06:47:46.277500318Z" level=info msg="StartContainer for \"3fc7a267601db5a8387aa91702b0eb6d7c3b6a3212a816542cf1350008c52a82\"" Jan 20 06:47:46.279459 containerd[2553]: time="2026-01-20T06:47:46.279424876Z" level=info msg="connecting to shim 3fc7a267601db5a8387aa91702b0eb6d7c3b6a3212a816542cf1350008c52a82" address="unix:///run/containerd/s/797732e50291e7de9d2693a932fe1c9cbb38bce510350ce5f75648f6fc93d645" protocol=ttrpc version=3 Jan 20 06:47:46.281515 containerd[2553]: time="2026-01-20T06:47:46.281488102Z" level=info msg="connecting to shim a791b378f2d887beb028d82d1478dde0f57ade36e93b447ff1e6fc37d94ef5e0" address="unix:///run/containerd/s/bf1350882a82792e0197171428527da96fd659f59e3f96c3e212cdf794fcec08" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:47:46.296377 systemd[1]: Started cri-containerd-3fc7a267601db5a8387aa91702b0eb6d7c3b6a3212a816542cf1350008c52a82.scope - libcontainer container 3fc7a267601db5a8387aa91702b0eb6d7c3b6a3212a816542cf1350008c52a82. Jan 20 06:47:46.305498 systemd[1]: Started cri-containerd-a791b378f2d887beb028d82d1478dde0f57ade36e93b447ff1e6fc37d94ef5e0.scope - libcontainer container a791b378f2d887beb028d82d1478dde0f57ade36e93b447ff1e6fc37d94ef5e0. 
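The PROCTITLE values in these audit records are the command line of the audited process, hex-encoded with NUL bytes between the arguments; the long strings above and below decode to runc invocations, and the shorter ones attached to the NETFILTER_CFG records further down decode to iptables/ip6tables calls. A minimal sketch of the decoding (decode_proctitle is an illustrative name; the sample string is the ip6tables proctitle that appears later in this log):

    def decode_proctitle(hexstr: str) -> str:
        # PROCTITLE is argv hex-encoded with NUL separators; swap NULs for spaces.
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode("utf-8", errors="replace")

    print(decode_proctitle(
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # -> ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle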
Jan 20 06:47:46.313000 audit: BPF prog-id=162 op=LOAD Jan 20 06:47:46.313000 audit: BPF prog-id=163 op=LOAD Jan 20 06:47:46.313000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4102 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393162333738663264383837626562303238643832643134373864 Jan 20 06:47:46.313000 audit: BPF prog-id=163 op=UNLOAD Jan 20 06:47:46.313000 audit[4118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4102 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393162333738663264383837626562303238643832643134373864 Jan 20 06:47:46.314000 audit: BPF prog-id=164 op=LOAD Jan 20 06:47:46.314000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4102 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393162333738663264383837626562303238643832643134373864 Jan 20 06:47:46.314000 audit: BPF prog-id=165 op=LOAD Jan 20 06:47:46.314000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4102 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393162333738663264383837626562303238643832643134373864 Jan 20 06:47:46.314000 audit: BPF prog-id=165 op=UNLOAD Jan 20 06:47:46.314000 audit[4118]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4102 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393162333738663264383837626562303238643832643134373864 Jan 20 06:47:46.314000 audit: BPF prog-id=164 op=UNLOAD Jan 20 06:47:46.314000 audit[4118]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4102 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393162333738663264383837626562303238643832643134373864 Jan 20 06:47:46.314000 audit: BPF prog-id=166 op=LOAD Jan 20 06:47:46.314000 audit[4118]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4102 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137393162333738663264383837626562303238643832643134373864 Jan 20 06:47:46.334000 audit: BPF prog-id=167 op=LOAD Jan 20 06:47:46.334000 audit[4103]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4057 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366633761323637363031646235613833383761613931373032623065 Jan 20 06:47:46.335000 audit: BPF prog-id=168 op=LOAD Jan 20 06:47:46.335000 audit[4103]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4057 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366633761323637363031646235613833383761613931373032623065 Jan 20 06:47:46.335000 audit: BPF prog-id=168 op=UNLOAD Jan 20 06:47:46.335000 audit[4103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4057 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366633761323637363031646235613833383761613931373032623065 Jan 20 06:47:46.335000 audit: BPF prog-id=167 op=UNLOAD Jan 20 06:47:46.335000 audit[4103]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4057 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366633761323637363031646235613833383761613931373032623065 Jan 20 06:47:46.335000 audit: BPF prog-id=169 op=LOAD Jan 20 06:47:46.335000 audit[4103]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4057 pid=4103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366633761323637363031646235613833383761613931373032623065 Jan 20 06:47:46.356174 containerd[2553]: time="2026-01-20T06:47:46.356152732Z" level=info msg="StartContainer for \"3fc7a267601db5a8387aa91702b0eb6d7c3b6a3212a816542cf1350008c52a82\" returns successfully" Jan 20 06:47:46.367192 containerd[2553]: time="2026-01-20T06:47:46.367169967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2kkx9,Uid:fa77bde9-32af-4e39-8a7d-b5b1fc887f04,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a791b378f2d887beb028d82d1478dde0f57ade36e93b447ff1e6fc37d94ef5e0\"" Jan 20 06:47:46.368919 containerd[2553]: time="2026-01-20T06:47:46.368900613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 06:47:46.451000 audit[4198]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=4198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.451000 audit[4198]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdeb626b60 a2=0 a3=7ffdeb626b4c items=0 ppid=4134 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 06:47:46.453000 audit[4201]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=4201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.453000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaae3a870 a2=0 a3=7ffeaae3a85c items=0 ppid=4134 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 06:47:46.455000 audit[4199]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=4199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.455000 audit[4199]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5503a950 a2=0 a3=7fff5503a93c items=0 ppid=4134 pid=4199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.455000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 06:47:46.456000 audit[4203]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=4203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.456000 audit[4203]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0917a400 a2=0 a3=7fff0917a3ec items=0 ppid=4134 pid=4203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 06:47:46.457000 audit[4204]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_chain pid=4204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.457000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff44c84a30 a2=0 a3=7fff44c84a1c items=0 ppid=4134 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.457000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 06:47:46.459000 audit[4205]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_chain pid=4205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.459000 audit[4205]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc45cb9920 a2=0 a3=7ffc45cb990c items=0 ppid=4134 pid=4205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 06:47:46.554000 audit[4206]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=4206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.554000 audit[4206]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffaf4424a0 a2=0 a3=7fffaf44248c items=0 ppid=4134 pid=4206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.554000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 06:47:46.556000 audit[4208]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=4208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.556000 audit[4208]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdf065ccb0 a2=0 a3=7ffdf065cc9c items=0 ppid=4134 pid=4208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.556000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 06:47:46.559000 audit[4211]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=4211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.559000 audit[4211]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffbab47250 a2=0 a3=7fffbab4723c items=0 ppid=4134 pid=4211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 06:47:46.560000 audit[4212]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=4212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.560000 audit[4212]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcba46c6d0 a2=0 a3=7ffcba46c6bc items=0 ppid=4134 pid=4212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 06:47:46.562000 audit[4214]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=4214 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.562000 audit[4214]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd8a5562e0 a2=0 a3=7ffd8a5562cc items=0 ppid=4134 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 06:47:46.563000 audit[4215]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=4215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.563000 audit[4215]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff20df2e30 a2=0 a3=7fff20df2e1c items=0 ppid=4134 pid=4215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.563000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 06:47:46.565000 audit[4217]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=4217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.565000 audit[4217]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffa42e6ef0 a2=0 a3=7fffa42e6edc 
items=0 ppid=4134 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 06:47:46.568000 audit[4220]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=4220 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.568000 audit[4220]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeefdf4b60 a2=0 a3=7ffeefdf4b4c items=0 ppid=4134 pid=4220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.568000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 06:47:46.569000 audit[4221]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=4221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.569000 audit[4221]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3f539100 a2=0 a3=7ffe3f5390ec items=0 ppid=4134 pid=4221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 06:47:46.571000 audit[4223]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=4223 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.571000 audit[4223]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffbce2e1c0 a2=0 a3=7fffbce2e1ac items=0 ppid=4134 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 06:47:46.572000 audit[4224]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=4224 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.572000 audit[4224]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9654dd90 a2=0 a3=7ffd9654dd7c items=0 ppid=4134 pid=4224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 06:47:46.574000 audit[4226]: NETFILTER_CFG 
table=filter:74 family=2 entries=1 op=nft_register_rule pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.574000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7cb6ac40 a2=0 a3=7ffe7cb6ac2c items=0 ppid=4134 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:47:46.578000 audit[4229]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.578000 audit[4229]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe722a4730 a2=0 a3=7ffe722a471c items=0 ppid=4134 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:47:46.582000 audit[4232]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.582000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff7a9a1870 a2=0 a3=7fff7a9a185c items=0 ppid=4134 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 06:47:46.583000 audit[4233]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=4233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.583000 audit[4233]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9209b800 a2=0 a3=7ffd9209b7ec items=0 ppid=4134 pid=4233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 06:47:46.585000 audit[4235]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=4235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.585000 audit[4235]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe7752d910 a2=0 a3=7ffe7752d8fc items=0 ppid=4134 pid=4235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:47:46.589000 audit[4238]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_rule pid=4238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.589000 audit[4238]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee2103ab0 a2=0 a3=7ffee2103a9c items=0 ppid=4134 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:47:46.590000 audit[4239]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_chain pid=4239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.590000 audit[4239]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff4761de0 a2=0 a3=7ffff4761dcc items=0 ppid=4134 pid=4239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.590000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 06:47:46.592000 audit[4241]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=4241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 06:47:46.592000 audit[4241]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdf25d0710 a2=0 a3=7ffdf25d06fc items=0 ppid=4134 pid=4241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 06:47:46.651000 audit[4247]: NETFILTER_CFG table=filter:82 family=2 entries=8 op=nft_register_rule pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:46.651000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3909acd0 a2=0 a3=7ffc3909acbc items=0 ppid=4134 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.651000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:46.674000 audit[4247]: NETFILTER_CFG table=nat:83 family=2 entries=14 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:46.674000 audit[4247]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 
a1=7ffc3909acd0 a2=0 a3=7ffc3909acbc items=0 ppid=4134 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.674000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:46.675000 audit[4252]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.675000 audit[4252]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe92017170 a2=0 a3=7ffe9201715c items=0 ppid=4134 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.675000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 06:47:46.677000 audit[4254]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=4254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.677000 audit[4254]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe7fca9670 a2=0 a3=7ffe7fca965c items=0 ppid=4134 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 06:47:46.681000 audit[4257]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.681000 audit[4257]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdfbbd1a90 a2=0 a3=7ffdfbbd1a7c items=0 ppid=4134 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 06:47:46.682000 audit[4258]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=4258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.682000 audit[4258]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee2be70f0 a2=0 a3=7ffee2be70dc items=0 ppid=4134 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 06:47:46.684000 audit[4260]: NETFILTER_CFG table=filter:88 family=10 entries=1 
op=nft_register_rule pid=4260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.684000 audit[4260]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeea7846d0 a2=0 a3=7ffeea7846bc items=0 ppid=4134 pid=4260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.684000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 06:47:46.685000 audit[4261]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=4261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.685000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2d185070 a2=0 a3=7ffd2d18505c items=0 ppid=4134 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 06:47:46.687000 audit[4263]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=4263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.687000 audit[4263]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc5d859240 a2=0 a3=7ffc5d85922c items=0 ppid=4134 pid=4263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 06:47:46.690000 audit[4266]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=4266 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.690000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdf9c15440 a2=0 a3=7ffdf9c1542c items=0 ppid=4134 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.690000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 06:47:46.691000 audit[4267]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=4267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.691000 audit[4267]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe77b3bb00 a2=0 a3=7ffe77b3baec items=0 ppid=4134 pid=4267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 06:47:46.693000 audit[4269]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=4269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.693000 audit[4269]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff52f9f310 a2=0 a3=7fff52f9f2fc items=0 ppid=4134 pid=4269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 06:47:46.694000 audit[4270]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=4270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.694000 audit[4270]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdff519b70 a2=0 a3=7ffdff519b5c items=0 ppid=4134 pid=4270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 06:47:46.696000 audit[4272]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=4272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.696000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdac0882d0 a2=0 a3=7ffdac0882bc items=0 ppid=4134 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 06:47:46.699000 audit[4275]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.699000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe32826c80 a2=0 a3=7ffe32826c6c items=0 ppid=4134 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.699000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 06:47:46.702000 audit[4278]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=4278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.702000 audit[4278]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe76d37a90 a2=0 a3=7ffe76d37a7c items=0 ppid=4134 pid=4278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 06:47:46.703000 audit[4279]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=4279 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.703000 audit[4279]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd02d44580 a2=0 a3=7ffd02d4456c items=0 ppid=4134 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 06:47:46.705000 audit[4281]: NETFILTER_CFG table=nat:99 family=10 entries=1 op=nft_register_rule pid=4281 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.705000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcfa16f210 a2=0 a3=7ffcfa16f1fc items=0 ppid=4134 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:47:46.708000 audit[4284]: NETFILTER_CFG table=nat:100 family=10 entries=1 op=nft_register_rule pid=4284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.708000 audit[4284]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd1aadb0a0 a2=0 a3=7ffd1aadb08c items=0 ppid=4134 pid=4284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.708000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 06:47:46.709000 audit[4285]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=4285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.709000 audit[4285]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4a3b21d0 a2=0 a3=7ffc4a3b21bc items=0 ppid=4134 pid=4285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 06:47:46.711000 
audit[4287]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=4287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.711000 audit[4287]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe7ee9b4a0 a2=0 a3=7ffe7ee9b48c items=0 ppid=4134 pid=4287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.711000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 06:47:46.712000 audit[4288]: NETFILTER_CFG table=filter:103 family=10 entries=1 op=nft_register_chain pid=4288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.712000 audit[4288]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9100f2b0 a2=0 a3=7ffe9100f29c items=0 ppid=4134 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.712000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 06:47:46.713000 audit[4290]: NETFILTER_CFG table=filter:104 family=10 entries=1 op=nft_register_rule pid=4290 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.713000 audit[4290]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdef0a9d20 a2=0 a3=7ffdef0a9d0c items=0 ppid=4134 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.713000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:47:46.716000 audit[4293]: NETFILTER_CFG table=filter:105 family=10 entries=1 op=nft_register_rule pid=4293 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 06:47:46.716000 audit[4293]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeb34f21f0 a2=0 a3=7ffeb34f21dc items=0 ppid=4134 pid=4293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.716000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 06:47:46.718000 audit[4295]: NETFILTER_CFG table=filter:106 family=10 entries=3 op=nft_register_rule pid=4295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 06:47:46.718000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffde7c2ec0 a2=0 a3=7fffde7c2eac items=0 ppid=4134 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.718000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 
06:47:46.719000 audit[4295]: NETFILTER_CFG table=nat:107 family=10 entries=7 op=nft_register_chain pid=4295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 06:47:46.719000 audit[4295]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffde7c2ec0 a2=0 a3=7fffde7c2eac items=0 ppid=4134 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:46.719000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:46.899749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860517620.mount: Deactivated successfully. Jan 20 06:47:49.001399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4288428627.mount: Deactivated successfully. Jan 20 06:47:49.532265 containerd[2553]: time="2026-01-20T06:47:49.532235449Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:49.534776 containerd[2553]: time="2026-01-20T06:47:49.534754577Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 20 06:47:49.537534 containerd[2553]: time="2026-01-20T06:47:49.537471343Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:49.540546 containerd[2553]: time="2026-01-20T06:47:49.540489813Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:47:49.540883 containerd[2553]: time="2026-01-20T06:47:49.540797327Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.171784628s" Jan 20 06:47:49.540883 containerd[2553]: time="2026-01-20T06:47:49.540818480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 06:47:49.542698 containerd[2553]: time="2026-01-20T06:47:49.542410237Z" level=info msg="CreateContainer within sandbox \"a791b378f2d887beb028d82d1478dde0f57ade36e93b447ff1e6fc37d94ef5e0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 06:47:49.565296 containerd[2553]: time="2026-01-20T06:47:49.565265424Z" level=info msg="Container d27f7f8b7cbf2c304a1f1353fbe4209de2e2669e1d749266e3ee968b4ccfd3f8: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:47:49.581240 containerd[2553]: time="2026-01-20T06:47:49.580563669Z" level=info msg="CreateContainer within sandbox \"a791b378f2d887beb028d82d1478dde0f57ade36e93b447ff1e6fc37d94ef5e0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d27f7f8b7cbf2c304a1f1353fbe4209de2e2669e1d749266e3ee968b4ccfd3f8\"" Jan 20 06:47:49.581826 containerd[2553]: time="2026-01-20T06:47:49.581796065Z" level=info msg="StartContainer for \"d27f7f8b7cbf2c304a1f1353fbe4209de2e2669e1d749266e3ee968b4ccfd3f8\"" Jan 20 
06:47:49.582923 containerd[2553]: time="2026-01-20T06:47:49.582903452Z" level=info msg="connecting to shim d27f7f8b7cbf2c304a1f1353fbe4209de2e2669e1d749266e3ee968b4ccfd3f8" address="unix:///run/containerd/s/bf1350882a82792e0197171428527da96fd659f59e3f96c3e212cdf794fcec08" protocol=ttrpc version=3 Jan 20 06:47:49.604391 systemd[1]: Started cri-containerd-d27f7f8b7cbf2c304a1f1353fbe4209de2e2669e1d749266e3ee968b4ccfd3f8.scope - libcontainer container d27f7f8b7cbf2c304a1f1353fbe4209de2e2669e1d749266e3ee968b4ccfd3f8. Jan 20 06:47:49.611000 audit: BPF prog-id=170 op=LOAD Jan 20 06:47:49.611000 audit: BPF prog-id=171 op=LOAD Jan 20 06:47:49.611000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c238 a2=98 a3=0 items=0 ppid=4102 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:49.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432376637663862376362663263333034613166313335336662653432 Jan 20 06:47:49.611000 audit: BPF prog-id=171 op=UNLOAD Jan 20 06:47:49.611000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4102 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:49.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432376637663862376362663263333034613166313335336662653432 Jan 20 06:47:49.611000 audit: BPF prog-id=172 op=LOAD Jan 20 06:47:49.611000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c488 a2=98 a3=0 items=0 ppid=4102 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:49.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432376637663862376362663263333034613166313335336662653432 Jan 20 06:47:49.611000 audit: BPF prog-id=173 op=LOAD Jan 20 06:47:49.611000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00019c218 a2=98 a3=0 items=0 ppid=4102 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:49.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432376637663862376362663263333034613166313335336662653432 Jan 20 06:47:49.611000 audit: BPF prog-id=173 op=UNLOAD Jan 20 06:47:49.611000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4102 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:49.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432376637663862376362663263333034613166313335336662653432 Jan 20 06:47:49.611000 audit: BPF prog-id=172 op=UNLOAD Jan 20 06:47:49.611000 audit[4306]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4102 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:49.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432376637663862376362663263333034613166313335336662653432 Jan 20 06:47:49.611000 audit: BPF prog-id=174 op=LOAD Jan 20 06:47:49.611000 audit[4306]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00019c6e8 a2=98 a3=0 items=0 ppid=4102 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:49.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432376637663862376362663263333034613166313335336662653432 Jan 20 06:47:49.627148 containerd[2553]: time="2026-01-20T06:47:49.627114770Z" level=info msg="StartContainer for \"d27f7f8b7cbf2c304a1f1353fbe4209de2e2669e1d749266e3ee968b4ccfd3f8\" returns successfully" Jan 20 06:47:50.520004 kubelet[4005]: I0120 06:47:50.519900 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d4vg7" podStartSLOduration=5.519884355 podStartE2EDuration="5.519884355s" podCreationTimestamp="2026-01-20 06:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:47:46.609805506 +0000 UTC m=+6.139625897" watchObservedRunningTime="2026-01-20 06:47:50.519884355 +0000 UTC m=+10.049704754" Jan 20 06:47:50.850558 kubelet[4005]: I0120 06:47:50.850338 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2kkx9" podStartSLOduration=2.677051 podStartE2EDuration="5.850324505s" podCreationTimestamp="2026-01-20 06:47:45 +0000 UTC" firstStartedPulling="2026-01-20 06:47:46.36811737 +0000 UTC m=+5.897937771" lastFinishedPulling="2026-01-20 06:47:49.541390879 +0000 UTC m=+9.071211276" observedRunningTime="2026-01-20 06:47:50.616297716 +0000 UTC m=+10.146118111" watchObservedRunningTime="2026-01-20 06:47:50.850324505 +0000 UTC m=+10.380144904" Jan 20 06:47:54.803761 sudo[3025]: pam_unix(sudo:session): session closed for user root Jan 20 06:47:54.809241 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 06:47:54.809325 kernel: audit: type=1106 audit(1768891674.803:546): pid=3025 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.803000 audit[3025]: USER_END pid=3025 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.808000 audit[3025]: CRED_DISP pid=3025 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.814395 kernel: audit: type=1104 audit(1768891674.808:547): pid=3025 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.910006 sshd[3024]: Connection closed by 10.200.16.10 port 40526 Jan 20 06:47:54.910392 sshd-session[3020]: pam_unix(sshd:session): session closed for user core Jan 20 06:47:54.918245 kernel: audit: type=1106 audit(1768891674.911:548): pid=3020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:54.911000 audit[3020]: USER_END pid=3020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:54.918802 systemd[1]: sshd@6-10.200.8.22:22-10.200.16.10:40526.service: Deactivated successfully. Jan 20 06:47:54.911000 audit[3020]: CRED_DISP pid=3020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:54.923257 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 06:47:54.923566 systemd[1]: session-10.scope: Consumed 2.818s CPU time, 224.6M memory peak. Jan 20 06:47:54.924264 kernel: audit: type=1104 audit(1768891674.911:549): pid=3020 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:47:54.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.22:22-10.200.16.10:40526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.929227 kernel: audit: type=1131 audit(1768891674.918:550): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.8.22:22-10.200.16.10:40526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:47:54.928908 systemd-logind[2525]: Session 10 logged out. Waiting for processes to exit. Jan 20 06:47:54.929596 systemd-logind[2525]: Removed session 10. 
Jan 20 06:47:55.366000 audit[4388]: NETFILTER_CFG table=filter:108 family=2 entries=15 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:55.370488 kernel: audit: type=1325 audit(1768891675.366:551): table=filter:108 family=2 entries=15 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:55.366000 audit[4388]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed5de2a20 a2=0 a3=7ffed5de2a0c items=0 ppid=4134 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:55.376237 kernel: audit: type=1300 audit(1768891675.366:551): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed5de2a20 a2=0 a3=7ffed5de2a0c items=0 ppid=4134 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:55.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:55.381229 kernel: audit: type=1327 audit(1768891675.366:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:55.377000 audit[4388]: NETFILTER_CFG table=nat:109 family=2 entries=12 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:55.385270 kernel: audit: type=1325 audit(1768891675.377:552): table=nat:109 family=2 entries=12 op=nft_register_rule pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:55.377000 audit[4388]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed5de2a20 a2=0 a3=0 items=0 ppid=4134 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:55.392234 kernel: audit: type=1300 audit(1768891675.377:552): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed5de2a20 a2=0 a3=0 items=0 ppid=4134 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:55.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:55.424000 audit[4390]: NETFILTER_CFG table=filter:110 family=2 entries=16 op=nft_register_rule pid=4390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:55.424000 audit[4390]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdd3f94c60 a2=0 a3=7ffdd3f94c4c items=0 ppid=4134 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:55.424000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:55.428000 audit[4390]: NETFILTER_CFG table=nat:111 family=2 entries=12 op=nft_register_rule pid=4390 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:55.428000 audit[4390]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd3f94c60 a2=0 a3=0 items=0 ppid=4134 pid=4390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:55.428000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:57.351000 audit[4392]: NETFILTER_CFG table=filter:112 family=2 entries=17 op=nft_register_rule pid=4392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:57.351000 audit[4392]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe3aeca680 a2=0 a3=7ffe3aeca66c items=0 ppid=4134 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:57.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:57.355000 audit[4392]: NETFILTER_CFG table=nat:113 family=2 entries=12 op=nft_register_rule pid=4392 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:57.355000 audit[4392]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3aeca680 a2=0 a3=0 items=0 ppid=4134 pid=4392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:57.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:57.372000 audit[4394]: NETFILTER_CFG table=filter:114 family=2 entries=18 op=nft_register_rule pid=4394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:57.372000 audit[4394]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff120babf0 a2=0 a3=7fff120babdc items=0 ppid=4134 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:57.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:57.376000 audit[4394]: NETFILTER_CFG table=nat:115 family=2 entries=12 op=nft_register_rule pid=4394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:57.376000 audit[4394]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff120babf0 a2=0 a3=0 items=0 ppid=4134 pid=4394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:57.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:58.388000 audit[4396]: NETFILTER_CFG table=filter:116 family=2 entries=19 op=nft_register_rule pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:58.388000 audit[4396]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7fff30184ef0 a2=0 a3=7fff30184edc items=0 ppid=4134 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:58.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:58.391000 audit[4396]: NETFILTER_CFG table=nat:117 family=2 entries=12 op=nft_register_rule pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:58.391000 audit[4396]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff30184ef0 a2=0 a3=0 items=0 ppid=4134 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:58.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:58.974047 systemd[1]: Created slice kubepods-besteffort-pod908a2b21_9cb5_4731_b2b8_e36673bf3665.slice - libcontainer container kubepods-besteffort-pod908a2b21_9cb5_4731_b2b8_e36673bf3665.slice. Jan 20 06:47:59.066173 kubelet[4005]: I0120 06:47:59.066141 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/908a2b21-9cb5-4731-b2b8-e36673bf3665-tigera-ca-bundle\") pod \"calico-typha-57787f6dff-7mrtl\" (UID: \"908a2b21-9cb5-4731-b2b8-e36673bf3665\") " pod="calico-system/calico-typha-57787f6dff-7mrtl" Jan 20 06:47:59.066173 kubelet[4005]: I0120 06:47:59.066172 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/908a2b21-9cb5-4731-b2b8-e36673bf3665-typha-certs\") pod \"calico-typha-57787f6dff-7mrtl\" (UID: \"908a2b21-9cb5-4731-b2b8-e36673bf3665\") " pod="calico-system/calico-typha-57787f6dff-7mrtl" Jan 20 06:47:59.066515 kubelet[4005]: I0120 06:47:59.066188 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km5p2\" (UniqueName: \"kubernetes.io/projected/908a2b21-9cb5-4731-b2b8-e36673bf3665-kube-api-access-km5p2\") pod \"calico-typha-57787f6dff-7mrtl\" (UID: \"908a2b21-9cb5-4731-b2b8-e36673bf3665\") " pod="calico-system/calico-typha-57787f6dff-7mrtl" Jan 20 06:47:59.156081 systemd[1]: Created slice kubepods-besteffort-pod2672f06c_6b2b_42d6_a05f_730e21e04285.slice - libcontainer container kubepods-besteffort-pod2672f06c_6b2b_42d6_a05f_730e21e04285.slice. 
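The proctitle= values in the audit records above are hex-encoded argument vectors with NUL separators. A minimal sketch (Python 3 assumed; not part of the captured log) that recovers the command line behind the iptables-restore records:

    # Decode an audit PROCTITLE field: hex -> bytes, then NUL separators -> spaces.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
    )
    cmdline = bytes.fromhex(proctitle_hex).replace(b"\x00", b" ").decode("ascii")
    print(cmdline)  # -> iptables-restore -w 5 -W 100000 --noflush --counters

The iptables-restore proctitle= fields in this section are identical, so they all resolve to that same invocation; the later runc audit records carry a different (longer) proctitle and decode the same way.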
Jan 20 06:47:59.166674 kubelet[4005]: I0120 06:47:59.166646 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-var-run-calico\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166760 kubelet[4005]: I0120 06:47:59.166721 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-lib-modules\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166760 kubelet[4005]: I0120 06:47:59.166736 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-cni-bin-dir\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166760 kubelet[4005]: I0120 06:47:59.166752 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-cni-log-dir\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166824 kubelet[4005]: I0120 06:47:59.166768 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2672f06c-6b2b-42d6-a05f-730e21e04285-tigera-ca-bundle\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166824 kubelet[4005]: I0120 06:47:59.166787 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2672f06c-6b2b-42d6-a05f-730e21e04285-node-certs\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166824 kubelet[4005]: I0120 06:47:59.166802 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-policysync\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166824 kubelet[4005]: I0120 06:47:59.166818 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-flexvol-driver-host\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166902 kubelet[4005]: I0120 06:47:59.166833 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-xtables-lock\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166902 kubelet[4005]: I0120 06:47:59.166847 4005 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-cni-net-dir\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166902 kubelet[4005]: I0120 06:47:59.166862 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2672f06c-6b2b-42d6-a05f-730e21e04285-var-lib-calico\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.166902 kubelet[4005]: I0120 06:47:59.166877 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvkwf\" (UniqueName: \"kubernetes.io/projected/2672f06c-6b2b-42d6-a05f-730e21e04285-kube-api-access-jvkwf\") pod \"calico-node-xvzx8\" (UID: \"2672f06c-6b2b-42d6-a05f-730e21e04285\") " pod="calico-system/calico-node-xvzx8" Jan 20 06:47:59.269442 kubelet[4005]: E0120 06:47:59.269325 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.269442 kubelet[4005]: W0120 06:47:59.269343 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.269442 kubelet[4005]: E0120 06:47:59.269366 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.269559 kubelet[4005]: E0120 06:47:59.269474 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.269559 kubelet[4005]: W0120 06:47:59.269479 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.269559 kubelet[4005]: E0120 06:47:59.269486 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.270408 kubelet[4005]: E0120 06:47:59.269816 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.270408 kubelet[4005]: W0120 06:47:59.269827 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.270408 kubelet[4005]: E0120 06:47:59.269837 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.270408 kubelet[4005]: E0120 06:47:59.269997 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.270408 kubelet[4005]: W0120 06:47:59.270005 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.270408 kubelet[4005]: E0120 06:47:59.270012 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.270408 kubelet[4005]: E0120 06:47:59.270138 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.270408 kubelet[4005]: W0120 06:47:59.270143 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.270408 kubelet[4005]: E0120 06:47:59.270152 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.270897 kubelet[4005]: E0120 06:47:59.270523 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.270897 kubelet[4005]: W0120 06:47:59.270533 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.270897 kubelet[4005]: E0120 06:47:59.270611 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.270897 kubelet[4005]: E0120 06:47:59.270764 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.270897 kubelet[4005]: W0120 06:47:59.270770 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.270897 kubelet[4005]: E0120 06:47:59.270802 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.276229 kubelet[4005]: E0120 06:47:59.272323 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.276229 kubelet[4005]: W0120 06:47:59.272339 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.276229 kubelet[4005]: E0120 06:47:59.272362 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.276229 kubelet[4005]: E0120 06:47:59.272499 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.276229 kubelet[4005]: W0120 06:47:59.272505 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.276229 kubelet[4005]: E0120 06:47:59.272587 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.276229 kubelet[4005]: E0120 06:47:59.272624 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.276229 kubelet[4005]: W0120 06:47:59.272629 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.276229 kubelet[4005]: E0120 06:47:59.272643 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.276229 kubelet[4005]: E0120 06:47:59.272783 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.276477 kubelet[4005]: W0120 06:47:59.272788 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.276477 kubelet[4005]: E0120 06:47:59.272801 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.276477 kubelet[4005]: E0120 06:47:59.272927 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.276477 kubelet[4005]: W0120 06:47:59.272933 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.276477 kubelet[4005]: E0120 06:47:59.272940 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.276477 kubelet[4005]: E0120 06:47:59.275788 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.276477 kubelet[4005]: W0120 06:47:59.275798 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.276477 kubelet[4005]: E0120 06:47:59.275809 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.279352 kubelet[4005]: E0120 06:47:59.279302 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.279352 kubelet[4005]: W0120 06:47:59.279316 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.279352 kubelet[4005]: E0120 06:47:59.279328 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.282192 containerd[2553]: time="2026-01-20T06:47:59.282147815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57787f6dff-7mrtl,Uid:908a2b21-9cb5-4731-b2b8-e36673bf3665,Namespace:calico-system,Attempt:0,}" Jan 20 06:47:59.324009 containerd[2553]: time="2026-01-20T06:47:59.323894174Z" level=info msg="connecting to shim b8ddded6fe9a4b270ff906d55af0d476b2e69cdd4cc3ee036f86521b55e1e321" address="unix:///run/containerd/s/4eafbadce3e6664d69102d926ff58d75ecc12df9a72146780f0d0e72994a52fd" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:47:59.351423 systemd[1]: Started cri-containerd-b8ddded6fe9a4b270ff906d55af0d476b2e69cdd4cc3ee036f86521b55e1e321.scope - libcontainer container b8ddded6fe9a4b270ff906d55af0d476b2e69cdd4cc3ee036f86521b55e1e321. Jan 20 06:47:59.355956 kubelet[4005]: E0120 06:47:59.355930 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:47:59.360223 kubelet[4005]: E0120 06:47:59.360193 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.360327 kubelet[4005]: W0120 06:47:59.360313 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.360358 kubelet[4005]: E0120 06:47:59.360332 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.360611 kubelet[4005]: E0120 06:47:59.360597 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.360611 kubelet[4005]: W0120 06:47:59.360611 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.360672 kubelet[4005]: E0120 06:47:59.360622 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.361033 kubelet[4005]: E0120 06:47:59.360896 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.361033 kubelet[4005]: W0120 06:47:59.360903 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.361033 kubelet[4005]: E0120 06:47:59.360913 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.361238 kubelet[4005]: E0120 06:47:59.361223 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.361238 kubelet[4005]: W0120 06:47:59.361231 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.361303 kubelet[4005]: E0120 06:47:59.361241 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.361586 kubelet[4005]: E0120 06:47:59.361545 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.361586 kubelet[4005]: W0120 06:47:59.361558 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.361586 kubelet[4005]: E0120 06:47:59.361568 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.362013 kubelet[4005]: E0120 06:47:59.361998 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.362013 kubelet[4005]: W0120 06:47:59.362012 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.362091 kubelet[4005]: E0120 06:47:59.362024 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.362517 kubelet[4005]: E0120 06:47:59.362496 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.362517 kubelet[4005]: W0120 06:47:59.362515 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.362591 kubelet[4005]: E0120 06:47:59.362527 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.363038 kubelet[4005]: E0120 06:47:59.363023 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.363038 kubelet[4005]: W0120 06:47:59.363037 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.363119 kubelet[4005]: E0120 06:47:59.363048 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.363206 kubelet[4005]: E0120 06:47:59.363196 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.363243 kubelet[4005]: W0120 06:47:59.363206 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.363243 kubelet[4005]: E0120 06:47:59.363233 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.363539 kubelet[4005]: E0120 06:47:59.363487 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.363539 kubelet[4005]: W0120 06:47:59.363497 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.363539 kubelet[4005]: E0120 06:47:59.363509 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.363870 kubelet[4005]: E0120 06:47:59.363839 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.364193 kubelet[4005]: W0120 06:47:59.364108 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.364193 kubelet[4005]: E0120 06:47:59.364129 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.365574 kubelet[4005]: E0120 06:47:59.365553 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.365574 kubelet[4005]: W0120 06:47:59.365572 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.365663 kubelet[4005]: E0120 06:47:59.365584 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.366320 kubelet[4005]: E0120 06:47:59.366305 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.366320 kubelet[4005]: W0120 06:47:59.366320 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.366489 kubelet[4005]: E0120 06:47:59.366332 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.366669 kubelet[4005]: E0120 06:47:59.366605 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.366669 kubelet[4005]: W0120 06:47:59.366615 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.366669 kubelet[4005]: E0120 06:47:59.366626 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.366920 kubelet[4005]: E0120 06:47:59.366904 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.366920 kubelet[4005]: W0120 06:47:59.366914 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.366977 kubelet[4005]: E0120 06:47:59.366925 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.366000 audit: BPF prog-id=175 op=LOAD Jan 20 06:47:59.368298 kubelet[4005]: E0120 06:47:59.367168 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.368298 kubelet[4005]: W0120 06:47:59.367177 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.368298 kubelet[4005]: E0120 06:47:59.367187 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.368484 kubelet[4005]: E0120 06:47:59.368410 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.368484 kubelet[4005]: W0120 06:47:59.368421 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.368549 kubelet[4005]: E0120 06:47:59.368522 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.367000 audit: BPF prog-id=176 op=LOAD Jan 20 06:47:59.367000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238646464656436666539613462323730666639303664353561663064 Jan 20 06:47:59.367000 audit: BPF prog-id=176 op=UNLOAD Jan 20 06:47:59.367000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238646464656436666539613462323730666639303664353561663064 Jan 20 06:47:59.369098 kubelet[4005]: E0120 06:47:59.369064 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.369098 kubelet[4005]: W0120 06:47:59.369077 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.369098 kubelet[4005]: E0120 06:47:59.369089 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.367000 audit: BPF prog-id=177 op=LOAD Jan 20 06:47:59.367000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238646464656436666539613462323730666639303664353561663064 Jan 20 06:47:59.367000 audit: BPF prog-id=178 op=LOAD Jan 20 06:47:59.367000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.367000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238646464656436666539613462323730666639303664353561663064 Jan 20 06:47:59.368000 audit: BPF prog-id=178 op=UNLOAD Jan 20 06:47:59.368000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238646464656436666539613462323730666639303664353561663064 Jan 20 06:47:59.368000 audit: BPF prog-id=177 op=UNLOAD Jan 20 06:47:59.368000 audit[4435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238646464656436666539613462323730666639303664353561663064 Jan 20 06:47:59.369982 kubelet[4005]: E0120 06:47:59.369624 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.369982 kubelet[4005]: W0120 06:47:59.369634 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.369982 kubelet[4005]: E0120 06:47:59.369645 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.368000 audit: BPF prog-id=179 op=LOAD Jan 20 06:47:59.368000 audit[4435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4424 pid=4435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6238646464656436666539613462323730666639303664353561663064 Jan 20 06:47:59.370546 kubelet[4005]: E0120 06:47:59.370093 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.370546 kubelet[4005]: W0120 06:47:59.370107 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.370546 kubelet[4005]: E0120 06:47:59.370117 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.370664 kubelet[4005]: E0120 06:47:59.370642 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.370664 kubelet[4005]: W0120 06:47:59.370651 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.370664 kubelet[4005]: E0120 06:47:59.370662 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.370833 kubelet[4005]: I0120 06:47:59.370681 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/feca3a47-a9f0-4272-a08e-b4b137171f9f-registration-dir\") pod \"csi-node-driver-r95bt\" (UID: \"feca3a47-a9f0-4272-a08e-b4b137171f9f\") " pod="calico-system/csi-node-driver-r95bt" Jan 20 06:47:59.370994 kubelet[4005]: E0120 06:47:59.370982 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.370994 kubelet[4005]: W0120 06:47:59.370993 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.371102 kubelet[4005]: E0120 06:47:59.371090 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.371141 kubelet[4005]: I0120 06:47:59.371112 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/feca3a47-a9f0-4272-a08e-b4b137171f9f-socket-dir\") pod \"csi-node-driver-r95bt\" (UID: \"feca3a47-a9f0-4272-a08e-b4b137171f9f\") " pod="calico-system/csi-node-driver-r95bt" Jan 20 06:47:59.371420 kubelet[4005]: E0120 06:47:59.371408 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.371420 kubelet[4005]: W0120 06:47:59.371419 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.371528 kubelet[4005]: E0120 06:47:59.371517 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.371554 kubelet[4005]: I0120 06:47:59.371540 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/feca3a47-a9f0-4272-a08e-b4b137171f9f-kubelet-dir\") pod \"csi-node-driver-r95bt\" (UID: \"feca3a47-a9f0-4272-a08e-b4b137171f9f\") " pod="calico-system/csi-node-driver-r95bt" Jan 20 06:47:59.372229 kubelet[4005]: E0120 06:47:59.372196 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.372296 kubelet[4005]: W0120 06:47:59.372241 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.372296 kubelet[4005]: E0120 06:47:59.372261 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.372296 kubelet[4005]: I0120 06:47:59.372278 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppfnq\" (UniqueName: \"kubernetes.io/projected/feca3a47-a9f0-4272-a08e-b4b137171f9f-kube-api-access-ppfnq\") pod \"csi-node-driver-r95bt\" (UID: \"feca3a47-a9f0-4272-a08e-b4b137171f9f\") " pod="calico-system/csi-node-driver-r95bt" Jan 20 06:47:59.373011 kubelet[4005]: E0120 06:47:59.372994 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.373011 kubelet[4005]: W0120 06:47:59.373010 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.373183 kubelet[4005]: E0120 06:47:59.373060 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.373183 kubelet[4005]: I0120 06:47:59.373082 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/feca3a47-a9f0-4272-a08e-b4b137171f9f-varrun\") pod \"csi-node-driver-r95bt\" (UID: \"feca3a47-a9f0-4272-a08e-b4b137171f9f\") " pod="calico-system/csi-node-driver-r95bt" Jan 20 06:47:59.373508 kubelet[4005]: E0120 06:47:59.373438 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.373567 kubelet[4005]: W0120 06:47:59.373557 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.373627 kubelet[4005]: E0120 06:47:59.373611 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.373918 kubelet[4005]: E0120 06:47:59.373909 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.373972 kubelet[4005]: W0120 06:47:59.373965 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.374044 kubelet[4005]: E0120 06:47:59.374035 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.374164 kubelet[4005]: E0120 06:47:59.374158 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.374256 kubelet[4005]: W0120 06:47:59.374194 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.374352 kubelet[4005]: E0120 06:47:59.374287 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.374418 kubelet[4005]: E0120 06:47:59.374413 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.374466 kubelet[4005]: W0120 06:47:59.374450 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.374579 kubelet[4005]: E0120 06:47:59.374571 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.374699 kubelet[4005]: E0120 06:47:59.374624 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.374699 kubelet[4005]: W0120 06:47:59.374653 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.374781 kubelet[4005]: E0120 06:47:59.374774 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.374868 kubelet[4005]: E0120 06:47:59.374847 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.374868 kubelet[4005]: W0120 06:47:59.374853 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.374868 kubelet[4005]: E0120 06:47:59.374859 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.375079 kubelet[4005]: E0120 06:47:59.375059 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.375079 kubelet[4005]: W0120 06:47:59.375065 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.375079 kubelet[4005]: E0120 06:47:59.375071 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.375429 kubelet[4005]: E0120 06:47:59.375395 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.375429 kubelet[4005]: W0120 06:47:59.375407 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.375429 kubelet[4005]: E0120 06:47:59.375415 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.375614 kubelet[4005]: E0120 06:47:59.375608 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.375690 kubelet[4005]: W0120 06:47:59.375651 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.375690 kubelet[4005]: E0120 06:47:59.375658 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.375880 kubelet[4005]: E0120 06:47:59.375858 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.375880 kubelet[4005]: W0120 06:47:59.375864 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.375880 kubelet[4005]: E0120 06:47:59.375870 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.400387 containerd[2553]: time="2026-01-20T06:47:59.400311249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57787f6dff-7mrtl,Uid:908a2b21-9cb5-4731-b2b8-e36673bf3665,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8ddded6fe9a4b270ff906d55af0d476b2e69cdd4cc3ee036f86521b55e1e321\"" Jan 20 06:47:59.399000 audit[4507]: NETFILTER_CFG table=filter:118 family=2 entries=21 op=nft_register_rule pid=4507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:59.399000 audit[4507]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd1d0c18a0 a2=0 a3=7ffd1d0c188c items=0 ppid=4134 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.399000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:59.401950 containerd[2553]: time="2026-01-20T06:47:59.401895837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 06:47:59.404000 audit[4507]: NETFILTER_CFG table=nat:119 family=2 entries=12 op=nft_register_rule pid=4507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:47:59.404000 audit[4507]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1d0c18a0 a2=0 a3=0 items=0 ppid=4134 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:47:59.459090 containerd[2553]: time="2026-01-20T06:47:59.459066654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xvzx8,Uid:2672f06c-6b2b-42d6-a05f-730e21e04285,Namespace:calico-system,Attempt:0,}" Jan 20 06:47:59.473550 kubelet[4005]: E0120 06:47:59.473534 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.473550 kubelet[4005]: W0120 06:47:59.473547 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.473656 kubelet[4005]: E0120 06:47:59.473558 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.473689 kubelet[4005]: E0120 06:47:59.473664 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.473689 kubelet[4005]: W0120 06:47:59.473670 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.473689 kubelet[4005]: E0120 06:47:59.473677 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.473834 kubelet[4005]: E0120 06:47:59.473825 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.473834 kubelet[4005]: W0120 06:47:59.473833 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.473897 kubelet[4005]: E0120 06:47:59.473848 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.474002 kubelet[4005]: E0120 06:47:59.473972 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.474002 kubelet[4005]: W0120 06:47:59.473980 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.474002 kubelet[4005]: E0120 06:47:59.473992 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.474114 kubelet[4005]: E0120 06:47:59.474104 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.474114 kubelet[4005]: W0120 06:47:59.474112 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.474203 kubelet[4005]: E0120 06:47:59.474121 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.474256 kubelet[4005]: E0120 06:47:59.474247 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.474256 kubelet[4005]: W0120 06:47:59.474252 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.474304 kubelet[4005]: E0120 06:47:59.474266 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.475243 kubelet[4005]: E0120 06:47:59.474356 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475243 kubelet[4005]: W0120 06:47:59.474361 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475243 kubelet[4005]: E0120 06:47:59.474371 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475243 kubelet[4005]: E0120 06:47:59.474461 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475243 kubelet[4005]: W0120 06:47:59.474464 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475243 kubelet[4005]: E0120 06:47:59.474472 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475243 kubelet[4005]: E0120 06:47:59.474575 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475243 kubelet[4005]: W0120 06:47:59.474579 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475243 kubelet[4005]: E0120 06:47:59.474587 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475243 kubelet[4005]: E0120 06:47:59.474684 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475386 kubelet[4005]: W0120 06:47:59.474689 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475386 kubelet[4005]: E0120 06:47:59.474696 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475386 kubelet[4005]: E0120 06:47:59.474771 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475386 kubelet[4005]: W0120 06:47:59.474775 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475386 kubelet[4005]: E0120 06:47:59.474781 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.475386 kubelet[4005]: E0120 06:47:59.474868 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475386 kubelet[4005]: W0120 06:47:59.474873 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475386 kubelet[4005]: E0120 06:47:59.474882 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475386 kubelet[4005]: E0120 06:47:59.475031 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475386 kubelet[4005]: W0120 06:47:59.475035 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475509 kubelet[4005]: E0120 06:47:59.475043 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475509 kubelet[4005]: E0120 06:47:59.475174 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475509 kubelet[4005]: W0120 06:47:59.475188 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475509 kubelet[4005]: E0120 06:47:59.475201 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475652 kubelet[4005]: E0120 06:47:59.475642 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475652 kubelet[4005]: W0120 06:47:59.475650 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475728 kubelet[4005]: E0120 06:47:59.475719 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475761 kubelet[4005]: E0120 06:47:59.475752 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475761 kubelet[4005]: W0120 06:47:59.475760 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475848 kubelet[4005]: E0120 06:47:59.475830 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.475893 kubelet[4005]: E0120 06:47:59.475860 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475893 kubelet[4005]: W0120 06:47:59.475864 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.475893 kubelet[4005]: E0120 06:47:59.475872 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.475978 kubelet[4005]: E0120 06:47:59.475962 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.475978 kubelet[4005]: W0120 06:47:59.475967 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476022 kubelet[4005]: E0120 06:47:59.475978 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.476102 kubelet[4005]: E0120 06:47:59.476092 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.476102 kubelet[4005]: W0120 06:47:59.476099 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476149 kubelet[4005]: E0120 06:47:59.476120 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.476253 kubelet[4005]: E0120 06:47:59.476244 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.476253 kubelet[4005]: W0120 06:47:59.476252 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476330 kubelet[4005]: E0120 06:47:59.476268 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.476405 kubelet[4005]: E0120 06:47:59.476396 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.476405 kubelet[4005]: W0120 06:47:59.476404 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476453 kubelet[4005]: E0120 06:47:59.476417 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.476548 kubelet[4005]: E0120 06:47:59.476540 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.476548 kubelet[4005]: W0120 06:47:59.476547 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476601 kubelet[4005]: E0120 06:47:59.476559 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.476666 kubelet[4005]: E0120 06:47:59.476658 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.476666 kubelet[4005]: W0120 06:47:59.476664 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476712 kubelet[4005]: E0120 06:47:59.476670 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.476755 kubelet[4005]: E0120 06:47:59.476746 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.476755 kubelet[4005]: W0120 06:47:59.476753 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476795 kubelet[4005]: E0120 06:47:59.476758 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.476875 kubelet[4005]: E0120 06:47:59.476864 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.476905 kubelet[4005]: W0120 06:47:59.476872 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.476905 kubelet[4005]: E0120 06:47:59.476885 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:47:59.483750 kubelet[4005]: E0120 06:47:59.483735 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:47:59.483750 kubelet[4005]: W0120 06:47:59.483747 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:47:59.483820 kubelet[4005]: E0120 06:47:59.483757 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:47:59.498601 containerd[2553]: time="2026-01-20T06:47:59.498573716Z" level=info msg="connecting to shim a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966" address="unix:///run/containerd/s/c40b5faeb3d528f8e54351c42642d2d409a9d760ab5d32df2a3babd9ea56383c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:47:59.515395 systemd[1]: Started cri-containerd-a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966.scope - libcontainer container a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966. Jan 20 06:47:59.521000 audit: BPF prog-id=180 op=LOAD Jan 20 06:47:59.522000 audit: BPF prog-id=181 op=LOAD Jan 20 06:47:59.522000 audit[4553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4543 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383063353230626131623339643630323831303234336533363037 Jan 20 06:47:59.522000 audit: BPF prog-id=181 op=UNLOAD Jan 20 06:47:59.522000 audit[4553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383063353230626131623339643630323831303234336533363037 Jan 20 06:47:59.522000 audit: BPF prog-id=182 op=LOAD Jan 20 06:47:59.522000 audit[4553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4543 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383063353230626131623339643630323831303234336533363037 Jan 20 06:47:59.522000 audit: BPF prog-id=183 op=LOAD Jan 20 06:47:59.522000 audit[4553]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4543 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383063353230626131623339643630323831303234336533363037 Jan 20 06:47:59.522000 audit: BPF prog-id=183 op=UNLOAD Jan 20 06:47:59.522000 audit[4553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4553 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383063353230626131623339643630323831303234336533363037 Jan 20 06:47:59.522000 audit: BPF prog-id=182 op=UNLOAD Jan 20 06:47:59.522000 audit[4553]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383063353230626131623339643630323831303234336533363037 Jan 20 06:47:59.522000 audit: BPF prog-id=184 op=LOAD Jan 20 06:47:59.522000 audit[4553]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4543 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:47:59.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136383063353230626131623339643630323831303234336533363037 Jan 20 06:47:59.537272 containerd[2553]: time="2026-01-20T06:47:59.537193755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xvzx8,Uid:2672f06c-6b2b-42d6-a05f-730e21e04285,Namespace:calico-system,Attempt:0,} returns sandbox id \"a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966\"" Jan 20 06:48:00.547980 kubelet[4005]: E0120 06:48:00.547030 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:00.705576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775506653.mount: Deactivated successfully. 
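The repeated kubelet driver-call.go and plugins.go messages above come from the FlexVolume dynamic plugin prober: for each directory under the volume-plugin path it executes the driver binary with the single argument init and parses its stdout as JSON. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet on this node, the exec fails (kubelet's exec wrapper reports this as "executable file not found in $PATH", as logged above), the captured output is empty, and unmarshalling empty input is what produces "unexpected end of JSON input". The minimal sketch below reproduces those two steps; the binary path is taken from the log, while the driverStatus type, its field names, and everything else are illustrative rather than kubelet's own code.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus stands in for the JSON object a FlexVolume driver is
    // expected to print on stdout for "init"; the field names here are
    // illustrative, not kubelet's exact type.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // 1. Exec the driver binary the way the prober would. On this node
        //    the binary does not exist, so Output() fails and out stays empty.
        out, err := exec.Command(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
            "init",
        ).Output()
        fmt.Println("exec error:", err) // e.g. "fork/exec ...: no such file or directory"

        // 2. Parse the empty output as JSON, as driver-call.go does; this is
        //    where "unexpected end of JSON input" comes from.
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("unmarshal error:", err) // "unexpected end of JSON input"
        }
    }

Run on a machine without that binary, this prints a missing-file exec error followed by the same "unexpected end of JSON input" message the kubelet logs for every probe attempt.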
Jan 20 06:48:01.667992 containerd[2553]: time="2026-01-20T06:48:01.667967067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:01.670230 containerd[2553]: time="2026-01-20T06:48:01.670139046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 20 06:48:01.672728 containerd[2553]: time="2026-01-20T06:48:01.672708828Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:01.680243 containerd[2553]: time="2026-01-20T06:48:01.680186456Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:01.680634 containerd[2553]: time="2026-01-20T06:48:01.680497926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.27857837s" Jan 20 06:48:01.680634 containerd[2553]: time="2026-01-20T06:48:01.680520850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 06:48:01.681515 containerd[2553]: time="2026-01-20T06:48:01.681418070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 06:48:01.691888 containerd[2553]: time="2026-01-20T06:48:01.691594292Z" level=info msg="CreateContainer within sandbox \"b8ddded6fe9a4b270ff906d55af0d476b2e69cdd4cc3ee036f86521b55e1e321\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 06:48:01.709144 containerd[2553]: time="2026-01-20T06:48:01.709108884Z" level=info msg="Container 802a81185865c84d1777c869c7363a6d8bad93be03206c21d306b5c12e969cb8: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:01.711662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268907382.mount: Deactivated successfully. Jan 20 06:48:01.728727 containerd[2553]: time="2026-01-20T06:48:01.728706229Z" level=info msg="CreateContainer within sandbox \"b8ddded6fe9a4b270ff906d55af0d476b2e69cdd4cc3ee036f86521b55e1e321\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"802a81185865c84d1777c869c7363a6d8bad93be03206c21d306b5c12e969cb8\"" Jan 20 06:48:01.729038 containerd[2553]: time="2026-01-20T06:48:01.728980012Z" level=info msg="StartContainer for \"802a81185865c84d1777c869c7363a6d8bad93be03206c21d306b5c12e969cb8\"" Jan 20 06:48:01.729959 containerd[2553]: time="2026-01-20T06:48:01.729886653Z" level=info msg="connecting to shim 802a81185865c84d1777c869c7363a6d8bad93be03206c21d306b5c12e969cb8" address="unix:///run/containerd/s/4eafbadce3e6664d69102d926ff58d75ecc12df9a72146780f0d0e72994a52fd" protocol=ttrpc version=3 Jan 20 06:48:01.749349 systemd[1]: Started cri-containerd-802a81185865c84d1777c869c7363a6d8bad93be03206c21d306b5c12e969cb8.scope - libcontainer container 802a81185865c84d1777c869c7363a6d8bad93be03206c21d306b5c12e969cb8. 
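The audit SYSCALL records around the runc BPF prog-id LOAD/UNLOAD events above and below carry the invoking command line as a PROCTITLE field, hex-encoded with NUL-separated arguments. The sketch below decodes a prefix of one of those proctitle values; the constant is copied verbatim from the log and truncated at an argument boundary for brevity, and the decoding is generic rather than tied to any tool shown in this log.

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // Prefix of a proctitle value copied from one of the audit records above.
        const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // Arguments are separated by NUL bytes, exactly as in /proc/<pid>/cmdline.
        args := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io --log
    }

Decoding the longer values in the log yields the rest of the runc invocation, including the --log path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/, up to the point where the audit proctitle field itself is cut off.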
Jan 20 06:48:01.759250 kernel: kauditd_printk_skb: 75 callbacks suppressed Jan 20 06:48:01.759310 kernel: audit: type=1334 audit(1768891681.755:579): prog-id=185 op=LOAD Jan 20 06:48:01.755000 audit: BPF prog-id=185 op=LOAD Jan 20 06:48:01.756000 audit: BPF prog-id=186 op=LOAD Jan 20 06:48:01.760368 kernel: audit: type=1334 audit(1768891681.756:580): prog-id=186 op=LOAD Jan 20 06:48:01.764988 kernel: audit: type=1300 audit(1768891681.756:580): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.756000 audit[4591]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.769951 kernel: audit: type=1327 audit(1768891681.756:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.756000 audit: BPF prog-id=186 op=UNLOAD Jan 20 06:48:01.756000 audit[4591]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.776593 kernel: audit: type=1334 audit(1768891681.756:581): prog-id=186 op=UNLOAD Jan 20 06:48:01.776645 kernel: audit: type=1300 audit(1768891681.756:581): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.781074 kernel: audit: type=1327 audit(1768891681.756:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.782401 kernel: audit: type=1334 audit(1768891681.756:582): prog-id=187 op=LOAD Jan 20 06:48:01.756000 audit: BPF prog-id=187 op=LOAD Jan 20 06:48:01.786254 kernel: audit: type=1300 audit(1768891681.756:582): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.756000 audit[4591]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.756000 audit: BPF prog-id=188 op=LOAD Jan 20 06:48:01.756000 audit[4591]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.756000 audit: BPF prog-id=188 op=UNLOAD Jan 20 06:48:01.756000 audit[4591]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.756000 audit: BPF prog-id=187 op=UNLOAD Jan 20 06:48:01.756000 audit[4591]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.757000 audit: BPF prog-id=189 op=LOAD Jan 20 06:48:01.757000 audit[4591]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4424 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:01.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.795232 kernel: audit: type=1327 audit(1768891681.756:582): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830326138313138353836356338346431373737633836396337333633 Jan 20 06:48:01.811288 containerd[2553]: time="2026-01-20T06:48:01.811264090Z" level=info msg="StartContainer for \"802a81185865c84d1777c869c7363a6d8bad93be03206c21d306b5c12e969cb8\" returns successfully" Jan 20 06:48:02.547226 kubelet[4005]: E0120 06:48:02.547127 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:02.626819 kubelet[4005]: I0120 06:48:02.626713 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57787f6dff-7mrtl" podStartSLOduration=2.346852994 podStartE2EDuration="4.626641852s" podCreationTimestamp="2026-01-20 06:47:58 +0000 UTC" firstStartedPulling="2026-01-20 06:47:59.401316567 +0000 UTC m=+18.931136957" lastFinishedPulling="2026-01-20 06:48:01.681105422 +0000 UTC m=+21.210925815" observedRunningTime="2026-01-20 06:48:02.626182696 +0000 UTC m=+22.156003095" watchObservedRunningTime="2026-01-20 06:48:02.626641852 +0000 UTC m=+22.156462248" Jan 20 06:48:02.691902 kubelet[4005]: E0120 06:48:02.691885 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.691902 kubelet[4005]: W0120 06:48:02.691899 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.691902 kubelet[4005]: E0120 06:48:02.691913 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692052 kubelet[4005]: E0120 06:48:02.692016 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692052 kubelet[4005]: W0120 06:48:02.692021 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692052 kubelet[4005]: E0120 06:48:02.692029 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692205 kubelet[4005]: E0120 06:48:02.692182 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692251 kubelet[4005]: W0120 06:48:02.692203 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692251 kubelet[4005]: E0120 06:48:02.692233 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:48:02.692350 kubelet[4005]: E0120 06:48:02.692320 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692350 kubelet[4005]: W0120 06:48:02.692341 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692350 kubelet[4005]: E0120 06:48:02.692347 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692443 kubelet[4005]: E0120 06:48:02.692434 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692443 kubelet[4005]: W0120 06:48:02.692441 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692503 kubelet[4005]: E0120 06:48:02.692447 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692537 kubelet[4005]: E0120 06:48:02.692517 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692537 kubelet[4005]: W0120 06:48:02.692521 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692537 kubelet[4005]: E0120 06:48:02.692526 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692618 kubelet[4005]: E0120 06:48:02.692593 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692618 kubelet[4005]: W0120 06:48:02.692597 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692618 kubelet[4005]: E0120 06:48:02.692602 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692696 kubelet[4005]: E0120 06:48:02.692671 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692696 kubelet[4005]: W0120 06:48:02.692676 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692696 kubelet[4005]: E0120 06:48:02.692682 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:48:02.692776 kubelet[4005]: E0120 06:48:02.692772 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692798 kubelet[4005]: W0120 06:48:02.692777 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692798 kubelet[4005]: E0120 06:48:02.692782 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692858 kubelet[4005]: E0120 06:48:02.692853 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692858 kubelet[4005]: W0120 06:48:02.692858 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692918 kubelet[4005]: E0120 06:48:02.692863 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.692944 kubelet[4005]: E0120 06:48:02.692935 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.692944 kubelet[4005]: W0120 06:48:02.692939 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.692998 kubelet[4005]: E0120 06:48:02.692944 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.693024 kubelet[4005]: E0120 06:48:02.693012 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.693024 kubelet[4005]: W0120 06:48:02.693016 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.693024 kubelet[4005]: E0120 06:48:02.693021 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.693102 kubelet[4005]: E0120 06:48:02.693091 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.693102 kubelet[4005]: W0120 06:48:02.693095 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.693102 kubelet[4005]: E0120 06:48:02.693100 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:48:02.693179 kubelet[4005]: E0120 06:48:02.693168 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.693179 kubelet[4005]: W0120 06:48:02.693173 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.693245 kubelet[4005]: E0120 06:48:02.693178 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.693274 kubelet[4005]: E0120 06:48:02.693266 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.693274 kubelet[4005]: W0120 06:48:02.693270 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.693331 kubelet[4005]: E0120 06:48:02.693275 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.699553 kubelet[4005]: E0120 06:48:02.699525 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.699553 kubelet[4005]: W0120 06:48:02.699550 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.699657 kubelet[4005]: E0120 06:48:02.699561 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.699772 kubelet[4005]: E0120 06:48:02.699666 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.699772 kubelet[4005]: W0120 06:48:02.699670 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.699772 kubelet[4005]: E0120 06:48:02.699677 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.699875 kubelet[4005]: E0120 06:48:02.699850 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.699875 kubelet[4005]: W0120 06:48:02.699873 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.699923 kubelet[4005]: E0120 06:48:02.699884 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:48:02.700003 kubelet[4005]: E0120 06:48:02.699979 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.700003 kubelet[4005]: W0120 06:48:02.700000 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.700059 kubelet[4005]: E0120 06:48:02.700007 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.700107 kubelet[4005]: E0120 06:48:02.700099 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.700107 kubelet[4005]: W0120 06:48:02.700107 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.700166 kubelet[4005]: E0120 06:48:02.700116 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.700281 kubelet[4005]: E0120 06:48:02.700259 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.700281 kubelet[4005]: W0120 06:48:02.700280 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.700325 kubelet[4005]: E0120 06:48:02.700288 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.700563 kubelet[4005]: E0120 06:48:02.700539 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.700563 kubelet[4005]: W0120 06:48:02.700549 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.700631 kubelet[4005]: E0120 06:48:02.700568 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.700826 kubelet[4005]: E0120 06:48:02.700738 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.700826 kubelet[4005]: W0120 06:48:02.700745 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.700826 kubelet[4005]: E0120 06:48:02.700755 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:48:02.700890 kubelet[4005]: E0120 06:48:02.700877 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.700890 kubelet[4005]: W0120 06:48:02.700882 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.700955 kubelet[4005]: E0120 06:48:02.700944 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.700985 kubelet[4005]: E0120 06:48:02.700977 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.700985 kubelet[4005]: W0120 06:48:02.700983 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.701067 kubelet[4005]: E0120 06:48:02.701055 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.701089 kubelet[4005]: E0120 06:48:02.701081 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.701089 kubelet[4005]: W0120 06:48:02.701085 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.701129 kubelet[4005]: E0120 06:48:02.701091 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.701252 kubelet[4005]: E0120 06:48:02.701204 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.701252 kubelet[4005]: W0120 06:48:02.701249 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.701311 kubelet[4005]: E0120 06:48:02.701257 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.701411 kubelet[4005]: E0120 06:48:02.701387 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.701411 kubelet[4005]: W0120 06:48:02.701408 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.701456 kubelet[4005]: E0120 06:48:02.701422 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:48:02.701600 kubelet[4005]: E0120 06:48:02.701560 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.701600 kubelet[4005]: W0120 06:48:02.701567 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.701600 kubelet[4005]: E0120 06:48:02.701574 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.701679 kubelet[4005]: E0120 06:48:02.701647 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.701679 kubelet[4005]: W0120 06:48:02.701652 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.701679 kubelet[4005]: E0120 06:48:02.701664 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.701963 kubelet[4005]: E0120 06:48:02.701820 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.701963 kubelet[4005]: W0120 06:48:02.701827 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.701963 kubelet[4005]: E0120 06:48:02.701848 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.702399 kubelet[4005]: E0120 06:48:02.702340 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.702399 kubelet[4005]: W0120 06:48:02.702352 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.702399 kubelet[4005]: E0120 06:48:02.702369 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 06:48:02.702526 kubelet[4005]: E0120 06:48:02.702500 4005 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 06:48:02.702526 kubelet[4005]: W0120 06:48:02.702506 4005 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 06:48:02.702526 kubelet[4005]: E0120 06:48:02.702514 4005 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 06:48:02.961554 containerd[2553]: time="2026-01-20T06:48:02.961529411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:02.964088 containerd[2553]: time="2026-01-20T06:48:02.964057778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 20 06:48:02.970043 containerd[2553]: time="2026-01-20T06:48:02.969947749Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:02.973395 containerd[2553]: time="2026-01-20T06:48:02.973357141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:02.974767 containerd[2553]: time="2026-01-20T06:48:02.974709514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.29246792s" Jan 20 06:48:02.974767 containerd[2553]: time="2026-01-20T06:48:02.974736081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 06:48:02.976953 containerd[2553]: time="2026-01-20T06:48:02.976926104Z" level=info msg="CreateContainer within sandbox \"a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 06:48:02.994102 containerd[2553]: time="2026-01-20T06:48:02.992603940Z" level=info msg="Container 9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:03.012430 containerd[2553]: time="2026-01-20T06:48:03.012409127Z" level=info msg="CreateContainer within sandbox \"a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc\"" Jan 20 06:48:03.013527 containerd[2553]: time="2026-01-20T06:48:03.012972997Z" level=info msg="StartContainer for \"9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc\"" Jan 20 06:48:03.014920 containerd[2553]: time="2026-01-20T06:48:03.014887802Z" level=info msg="connecting to shim 9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc" address="unix:///run/containerd/s/c40b5faeb3d528f8e54351c42642d2d409a9d760ab5d32df2a3babd9ea56383c" protocol=ttrpc version=3 Jan 20 06:48:03.036391 systemd[1]: Started cri-containerd-9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc.scope - libcontainer container 9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc. 
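As a quick cross-check of the reported pull time for ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: containerd logged the PullImage request at 06:48:01.681418070Z and the Pulled message at 06:48:02.974709514Z, a gap of about 1.293 s, consistent with the 1.29246792s containerd measured internally (the two are not expected to match exactly, since the log lines are written slightly before and after the events they describe). A small sketch of that arithmetic, using only the two timestamps from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the containerd log lines above.
        start, _ := time.Parse(time.RFC3339Nano, "2026-01-20T06:48:01.681418070Z") // PullImage request logged
        done, _ := time.Parse(time.RFC3339Nano, "2026-01-20T06:48:02.974709514Z")  // "Pulled image ..." logged

        // Wall-clock gap between the two log lines; containerd's own
        // measurement ("in 1.29246792s") is taken at slightly different
        // points, so expect a close but not identical value.
        fmt.Println(done.Sub(start)) // 1.293291444s
    }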
Jan 20 06:48:03.066000 audit: BPF prog-id=190 op=LOAD Jan 20 06:48:03.066000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4543 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:03.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616534333361333337653663383230633334623733663032666164 Jan 20 06:48:03.066000 audit: BPF prog-id=191 op=LOAD Jan 20 06:48:03.066000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4543 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:03.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616534333361333337653663383230633334623733663032666164 Jan 20 06:48:03.066000 audit: BPF prog-id=191 op=UNLOAD Jan 20 06:48:03.066000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:03.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616534333361333337653663383230633334623733663032666164 Jan 20 06:48:03.066000 audit: BPF prog-id=190 op=UNLOAD Jan 20 06:48:03.066000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:03.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616534333361333337653663383230633334623733663032666164 Jan 20 06:48:03.066000 audit: BPF prog-id=192 op=LOAD Jan 20 06:48:03.066000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4543 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:03.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965616534333361333337653663383230633334623733663032666164 Jan 20 06:48:03.084271 containerd[2553]: time="2026-01-20T06:48:03.084192593Z" level=info msg="StartContainer for 
\"9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc\" returns successfully" Jan 20 06:48:03.086351 systemd[1]: cri-containerd-9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc.scope: Deactivated successfully. Jan 20 06:48:03.089264 containerd[2553]: time="2026-01-20T06:48:03.089192079Z" level=info msg="received container exit event container_id:\"9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc\" id:\"9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc\" pid:4681 exited_at:{seconds:1768891683 nanos:88947451}" Jan 20 06:48:03.089000 audit: BPF prog-id=192 op=UNLOAD Jan 20 06:48:03.103290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eae433a337e6c820c34b73f02fada7b35a51e9606025f34336dbc12eb0557bc-rootfs.mount: Deactivated successfully. Jan 20 06:48:03.618887 kubelet[4005]: I0120 06:48:03.618872 4005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 06:48:04.548473 kubelet[4005]: E0120 06:48:04.547792 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:05.624443 containerd[2553]: time="2026-01-20T06:48:05.624386356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 06:48:06.269151 kubelet[4005]: I0120 06:48:06.268940 4005 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 06:48:06.287000 audit[4722]: NETFILTER_CFG table=filter:120 family=2 entries=21 op=nft_register_rule pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:06.287000 audit[4722]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd1a59de10 a2=0 a3=7ffd1a59ddfc items=0 ppid=4134 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:06.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:06.294000 audit[4722]: NETFILTER_CFG table=nat:121 family=2 entries=19 op=nft_register_chain pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:06.294000 audit[4722]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd1a59de10 a2=0 a3=7ffd1a59ddfc items=0 ppid=4134 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:06.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:06.547421 kubelet[4005]: E0120 06:48:06.547116 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:08.550579 kubelet[4005]: E0120 06:48:08.550229 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:09.033474 containerd[2553]: time="2026-01-20T06:48:09.033446508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:09.036720 containerd[2553]: time="2026-01-20T06:48:09.036647586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70445002" Jan 20 06:48:09.040050 containerd[2553]: time="2026-01-20T06:48:09.040029882Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:09.043493 containerd[2553]: time="2026-01-20T06:48:09.043419384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:09.043894 containerd[2553]: time="2026-01-20T06:48:09.043824714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.41941213s" Jan 20 06:48:09.043894 containerd[2553]: time="2026-01-20T06:48:09.043846888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 06:48:09.045817 containerd[2553]: time="2026-01-20T06:48:09.045311952Z" level=info msg="CreateContainer within sandbox \"a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 06:48:09.066311 containerd[2553]: time="2026-01-20T06:48:09.063646251Z" level=info msg="Container f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:09.082080 containerd[2553]: time="2026-01-20T06:48:09.081776513Z" level=info msg="CreateContainer within sandbox \"a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98\"" Jan 20 06:48:09.084178 containerd[2553]: time="2026-01-20T06:48:09.084159098Z" level=info msg="StartContainer for \"f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98\"" Jan 20 06:48:09.086654 containerd[2553]: time="2026-01-20T06:48:09.086627084Z" level=info msg="connecting to shim f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98" address="unix:///run/containerd/s/c40b5faeb3d528f8e54351c42642d2d409a9d760ab5d32df2a3babd9ea56383c" protocol=ttrpc version=3 Jan 20 06:48:09.105381 systemd[1]: Started cri-containerd-f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98.scope - libcontainer container f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98. 
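The audit records before and after this point log each runc invocation with its command line hex-encoded in the proctitle field, arguments separated by NUL bytes (72756E63 decodes to "runc", 2D2D726F6F74 to "--root", and so on). A small self-contained Go sketch for decoding such a value; the hex literal below is a shortened illustrative prefix, not the full string from the log.

// Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Shortened example; real proctitle values in the log are much longer.
	proctitle := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " ")) // runc --root /run/containerd/runc/k8s.io
}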
Jan 20 06:48:09.139000 audit: BPF prog-id=193 op=LOAD Jan 20 06:48:09.142195 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 20 06:48:09.142264 kernel: audit: type=1334 audit(1768891689.139:595): prog-id=193 op=LOAD Jan 20 06:48:09.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.148417 kernel: audit: type=1300 audit(1768891689.139:595): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:09.154109 kernel: audit: type=1327 audit(1768891689.139:595): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:09.139000 audit: BPF prog-id=194 op=LOAD Jan 20 06:48:09.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.163610 kernel: audit: type=1334 audit(1768891689.139:596): prog-id=194 op=LOAD Jan 20 06:48:09.163660 kernel: audit: type=1300 audit(1768891689.139:596): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:09.168378 kernel: audit: type=1327 audit(1768891689.139:596): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:09.139000 audit: BPF prog-id=194 op=UNLOAD Jan 20 06:48:09.171002 kernel: audit: type=1334 audit(1768891689.139:597): prog-id=194 op=UNLOAD Jan 20 06:48:09.139000 audit[4731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.176015 kernel: audit: type=1300 
audit(1768891689.139:597): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:09.182503 kernel: audit: type=1327 audit(1768891689.139:597): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:09.182560 containerd[2553]: time="2026-01-20T06:48:09.181398036Z" level=info msg="StartContainer for \"f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98\" returns successfully" Jan 20 06:48:09.139000 audit: BPF prog-id=193 op=UNLOAD Jan 20 06:48:09.185427 kernel: audit: type=1334 audit(1768891689.139:598): prog-id=193 op=UNLOAD Jan 20 06:48:09.139000 audit[4731]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:09.139000 audit: BPF prog-id=195 op=LOAD Jan 20 06:48:09.139000 audit[4731]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4543 pid=4731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:09.139000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632646532616434656336626631373537376463376531356561653937 Jan 20 06:48:10.547802 kubelet[4005]: E0120 06:48:10.547151 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:11.645080 containerd[2553]: time="2026-01-20T06:48:11.645026409Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 06:48:11.646328 systemd[1]: cri-containerd-f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98.scope: Deactivated successfully. 
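The "failed to reload cni configuration" error above reflects an /etc/cni/net.d that still has no usable network config: the install-cni container has written its kubeconfig, but no *.conflist (typically named 10-calico.conflist for Calico, an assumption not confirmed by this log) has landed yet, so the loader reports "no network config found". The Go sketch below performs the same kind of scan conceptually; it is an illustration, not the actual containerd/go-cni implementation.

// Sketch: scan /etc/cni/net.d for *.conflist files and report the network name.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type confList struct {
	Name    string `json:"name"`
	Plugins []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*.conflist")
	if err != nil {
		panic(err)
	}
	if len(matches) == 0 {
		// The state logged above.
		fmt.Println("no network config found in /etc/cni/net.d")
		return
	}
	for _, path := range matches {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var c confList
		if err := json.Unmarshal(data, &c); err != nil {
			panic(err)
		}
		fmt.Printf("%s: network %q, %d plugin(s)\n", filepath.Base(path), c.Name, len(c.Plugins))
	}
}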
Jan 20 06:48:11.646585 systemd[1]: cri-containerd-f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98.scope: Consumed 363ms CPU time, 189.3M memory peak, 171.3M written to disk. Jan 20 06:48:11.648257 containerd[2553]: time="2026-01-20T06:48:11.648199956Z" level=info msg="received container exit event container_id:\"f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98\" id:\"f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98\" pid:4744 exited_at:{seconds:1768891691 nanos:648016868}" Jan 20 06:48:11.649000 audit: BPF prog-id=195 op=UNLOAD Jan 20 06:48:11.665199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2de2ad4ec6bf17577dc7e15eae9717a489339e20294c4b542462ee51a8d7c98-rootfs.mount: Deactivated successfully. Jan 20 06:48:11.674927 kubelet[4005]: I0120 06:48:11.674460 4005 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 06:48:11.711983 systemd[1]: Created slice kubepods-besteffort-pod580ddd7e_5bcf_4d6b_b0d4_dddc2d6ea666.slice - libcontainer container kubepods-besteffort-pod580ddd7e_5bcf_4d6b_b0d4_dddc2d6ea666.slice. Jan 20 06:48:11.724483 systemd[1]: Created slice kubepods-burstable-pod02cc045c_02cb_4e4c_b380_c45b5c3edaed.slice - libcontainer container kubepods-burstable-pod02cc045c_02cb_4e4c_b380_c45b5c3edaed.slice. Jan 20 06:48:11.731841 systemd[1]: Created slice kubepods-burstable-pod7274afc5_8df6_4ee4_b52e_bf6155c0f0e9.slice - libcontainer container kubepods-burstable-pod7274afc5_8df6_4ee4_b52e_bf6155c0f0e9.slice. Jan 20 06:48:11.740843 systemd[1]: Created slice kubepods-besteffort-pod521ce380_6f9e_4050_b213_569fcc069aed.slice - libcontainer container kubepods-besteffort-pod521ce380_6f9e_4050_b213_569fcc069aed.slice. Jan 20 06:48:11.746899 systemd[1]: Created slice kubepods-besteffort-pod2309f609_f83d_4aea_8896_a25cb505ea38.slice - libcontainer container kubepods-besteffort-pod2309f609_f83d_4aea_8896_a25cb505ea38.slice. Jan 20 06:48:11.753270 systemd[1]: Created slice kubepods-besteffort-pod6a07f6bf_8507_4691_9e22_698d9549bb6f.slice - libcontainer container kubepods-besteffort-pod6a07f6bf_8507_4691_9e22_698d9549bb6f.slice. 
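Each "Created slice kubepods-..." entry above maps a pod onto a systemd cgroup slice: the QoS class picks the segment (besteffort or burstable here) and the pod UID is embedded with its dashes replaced by underscores, as the slice names in the log show. The following sketch only reproduces that observable naming for clarity; it is not kubelet code.

// Reproduce the pod-UID-to-slice naming visible in the log, e.g.
// UID 580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666, QoS besteffort
//   -> kubepods-besteffort-pod580ddd7e_5bcf_4d6b_b0d4_dddc2d6ea666.slice
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), escaped)
}

func main() {
	fmt.Println(podSlice("besteffort", "580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666")) // whisker pod
	fmt.Println(podSlice("burstable", "02cc045c-02cb-4e4c-b380-c45b5c3edaed"))  // coredns pod
}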
Jan 20 06:48:11.755203 kubelet[4005]: I0120 06:48:11.755187 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7z5w\" (UniqueName: \"kubernetes.io/projected/521ce380-6f9e-4050-b213-569fcc069aed-kube-api-access-v7z5w\") pod \"goldmane-666569f655-c4kj6\" (UID: \"521ce380-6f9e-4050-b213-569fcc069aed\") " pod="calico-system/goldmane-666569f655-c4kj6" Jan 20 06:48:11.755203 kubelet[4005]: I0120 06:48:11.755264 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xdd5\" (UniqueName: \"kubernetes.io/projected/2309f609-f83d-4aea-8896-a25cb505ea38-kube-api-access-5xdd5\") pod \"calico-apiserver-64f54f655c-9bp6l\" (UID: \"2309f609-f83d-4aea-8896-a25cb505ea38\") " pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" Jan 20 06:48:11.755203 kubelet[4005]: I0120 06:48:11.755285 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2w5s\" (UniqueName: \"kubernetes.io/projected/6205d977-3cd2-45d3-97f2-85111cfa22a7-kube-api-access-g2w5s\") pod \"calico-apiserver-64f54f655c-d627v\" (UID: \"6205d977-3cd2-45d3-97f2-85111cfa22a7\") " pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" Jan 20 06:48:11.755203 kubelet[4005]: I0120 06:48:11.755303 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-backend-key-pair\") pod \"whisker-5c68b4674b-2gdb5\" (UID: \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\") " pod="calico-system/whisker-5c68b4674b-2gdb5" Jan 20 06:48:11.755203 kubelet[4005]: I0120 06:48:11.755323 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-ca-bundle\") pod \"whisker-5c68b4674b-2gdb5\" (UID: \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\") " pod="calico-system/whisker-5c68b4674b-2gdb5" Jan 20 06:48:11.755773 kubelet[4005]: I0120 06:48:11.755341 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8zp5\" (UniqueName: \"kubernetes.io/projected/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-kube-api-access-n8zp5\") pod \"whisker-5c68b4674b-2gdb5\" (UID: \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\") " pod="calico-system/whisker-5c68b4674b-2gdb5" Jan 20 06:48:11.755773 kubelet[4005]: I0120 06:48:11.755358 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2309f609-f83d-4aea-8896-a25cb505ea38-calico-apiserver-certs\") pod \"calico-apiserver-64f54f655c-9bp6l\" (UID: \"2309f609-f83d-4aea-8896-a25cb505ea38\") " pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" Jan 20 06:48:11.755773 kubelet[4005]: I0120 06:48:11.755378 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02cc045c-02cb-4e4c-b380-c45b5c3edaed-config-volume\") pod \"coredns-668d6bf9bc-4zr2j\" (UID: \"02cc045c-02cb-4e4c-b380-c45b5c3edaed\") " pod="kube-system/coredns-668d6bf9bc-4zr2j" Jan 20 06:48:11.755773 kubelet[4005]: I0120 06:48:11.755396 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhmv\" 
(UniqueName: \"kubernetes.io/projected/02cc045c-02cb-4e4c-b380-c45b5c3edaed-kube-api-access-rdhmv\") pod \"coredns-668d6bf9bc-4zr2j\" (UID: \"02cc045c-02cb-4e4c-b380-c45b5c3edaed\") " pod="kube-system/coredns-668d6bf9bc-4zr2j" Jan 20 06:48:11.755773 kubelet[4005]: I0120 06:48:11.755412 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6205d977-3cd2-45d3-97f2-85111cfa22a7-calico-apiserver-certs\") pod \"calico-apiserver-64f54f655c-d627v\" (UID: \"6205d977-3cd2-45d3-97f2-85111cfa22a7\") " pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" Jan 20 06:48:11.755886 kubelet[4005]: I0120 06:48:11.755429 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/521ce380-6f9e-4050-b213-569fcc069aed-goldmane-ca-bundle\") pod \"goldmane-666569f655-c4kj6\" (UID: \"521ce380-6f9e-4050-b213-569fcc069aed\") " pod="calico-system/goldmane-666569f655-c4kj6" Jan 20 06:48:11.755886 kubelet[4005]: I0120 06:48:11.755445 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521ce380-6f9e-4050-b213-569fcc069aed-config\") pod \"goldmane-666569f655-c4kj6\" (UID: \"521ce380-6f9e-4050-b213-569fcc069aed\") " pod="calico-system/goldmane-666569f655-c4kj6" Jan 20 06:48:11.755886 kubelet[4005]: I0120 06:48:11.755464 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/521ce380-6f9e-4050-b213-569fcc069aed-goldmane-key-pair\") pod \"goldmane-666569f655-c4kj6\" (UID: \"521ce380-6f9e-4050-b213-569fcc069aed\") " pod="calico-system/goldmane-666569f655-c4kj6" Jan 20 06:48:11.755886 kubelet[4005]: I0120 06:48:11.755484 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7274afc5-8df6-4ee4-b52e-bf6155c0f0e9-config-volume\") pod \"coredns-668d6bf9bc-2tllm\" (UID: \"7274afc5-8df6-4ee4-b52e-bf6155c0f0e9\") " pod="kube-system/coredns-668d6bf9bc-2tllm" Jan 20 06:48:11.755886 kubelet[4005]: I0120 06:48:11.755501 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a07f6bf-8507-4691-9e22-698d9549bb6f-tigera-ca-bundle\") pod \"calico-kube-controllers-569b956df8-vdchn\" (UID: \"6a07f6bf-8507-4691-9e22-698d9549bb6f\") " pod="calico-system/calico-kube-controllers-569b956df8-vdchn" Jan 20 06:48:11.755990 kubelet[4005]: I0120 06:48:11.755519 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbl6\" (UniqueName: \"kubernetes.io/projected/6a07f6bf-8507-4691-9e22-698d9549bb6f-kube-api-access-sfbl6\") pod \"calico-kube-controllers-569b956df8-vdchn\" (UID: \"6a07f6bf-8507-4691-9e22-698d9549bb6f\") " pod="calico-system/calico-kube-controllers-569b956df8-vdchn" Jan 20 06:48:11.755990 kubelet[4005]: I0120 06:48:11.755536 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p85kk\" (UniqueName: \"kubernetes.io/projected/7274afc5-8df6-4ee4-b52e-bf6155c0f0e9-kube-api-access-p85kk\") pod \"coredns-668d6bf9bc-2tllm\" (UID: \"7274afc5-8df6-4ee4-b52e-bf6155c0f0e9\") " 
pod="kube-system/coredns-668d6bf9bc-2tllm" Jan 20 06:48:11.757912 systemd[1]: Created slice kubepods-besteffort-pod6205d977_3cd2_45d3_97f2_85111cfa22a7.slice - libcontainer container kubepods-besteffort-pod6205d977_3cd2_45d3_97f2_85111cfa22a7.slice. Jan 20 06:48:12.022006 containerd[2553]: time="2026-01-20T06:48:12.021946626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c68b4674b-2gdb5,Uid:580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:12.029924 containerd[2553]: time="2026-01-20T06:48:12.029901722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zr2j,Uid:02cc045c-02cb-4e4c-b380-c45b5c3edaed,Namespace:kube-system,Attempt:0,}" Jan 20 06:48:12.037563 containerd[2553]: time="2026-01-20T06:48:12.037531461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tllm,Uid:7274afc5-8df6-4ee4-b52e-bf6155c0f0e9,Namespace:kube-system,Attempt:0,}" Jan 20 06:48:12.046016 containerd[2553]: time="2026-01-20T06:48:12.045980738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c4kj6,Uid:521ce380-6f9e-4050-b213-569fcc069aed,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:12.049492 containerd[2553]: time="2026-01-20T06:48:12.049460470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-9bp6l,Uid:2309f609-f83d-4aea-8896-a25cb505ea38,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:48:12.057127 containerd[2553]: time="2026-01-20T06:48:12.057066746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-569b956df8-vdchn,Uid:6a07f6bf-8507-4691-9e22-698d9549bb6f,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:12.060645 containerd[2553]: time="2026-01-20T06:48:12.060616498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-d627v,Uid:6205d977-3cd2-45d3-97f2-85111cfa22a7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:48:12.554382 systemd[1]: Created slice kubepods-besteffort-podfeca3a47_a9f0_4272_a08e_b4b137171f9f.slice - libcontainer container kubepods-besteffort-podfeca3a47_a9f0_4272_a08e_b4b137171f9f.slice. Jan 20 06:48:12.556347 containerd[2553]: time="2026-01-20T06:48:12.556329543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r95bt,Uid:feca3a47-a9f0-4272-a08e-b4b137171f9f,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:18.506417 containerd[2553]: time="2026-01-20T06:48:18.506380775Z" level=error msg="Failed to destroy network for sandbox \"04138c144996d2318feba34f83bb9df22c5faa82a1f102f841f29d0ad62ced54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:18.508518 systemd[1]: run-netns-cni\x2d675d09e7\x2d528f\x2dfd2b\x2db3c6\x2dd859bf3f8284.mount: Deactivated successfully. 
Jan 20 06:48:18.646122 containerd[2553]: time="2026-01-20T06:48:18.645989505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 06:48:18.865851 containerd[2553]: time="2026-01-20T06:48:18.865819688Z" level=error msg="Failed to destroy network for sandbox \"fa321015adba5f5f05a6013f020a9d79f21f8b3809749481af42a28bbc516d2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:18.912851 containerd[2553]: time="2026-01-20T06:48:18.912810764Z" level=error msg="Failed to destroy network for sandbox \"d2110a25bd0d17adc303296a474ae0fe347a3c51e3ccd283300bc6f4f23d41a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:18.959781 containerd[2553]: time="2026-01-20T06:48:18.959739912Z" level=error msg="Failed to destroy network for sandbox \"9ad40814fbd5a40cc91333241d7380162cda4ba8256102eda587ddb83fa45805\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.004865 containerd[2553]: time="2026-01-20T06:48:19.004829503Z" level=error msg="Failed to destroy network for sandbox \"f408ee66de0112b65afba685854b8a6019477421a01cf68c6a4001040b1fe40a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.053964 containerd[2553]: time="2026-01-20T06:48:19.053934348Z" level=error msg="Failed to destroy network for sandbox \"32ea16f309d0156cf04c3526da23509d0233d4767b419b5816a922867392b2b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.126592 containerd[2553]: time="2026-01-20T06:48:19.126504956Z" level=error msg="Failed to destroy network for sandbox \"d2d2a87b72352e9344683a304abb97f7c738f4c858b889ce91abde2c8aaf3d64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.126747 containerd[2553]: time="2026-01-20T06:48:19.126505963Z" level=error msg="Failed to destroy network for sandbox \"bcd9956a4d58a9dbc5876d598516a37135e42c67abd5d3101ae1c8d486eafc65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.135732 containerd[2553]: time="2026-01-20T06:48:19.135686380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c68b4674b-2gdb5,Uid:580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04138c144996d2318feba34f83bb9df22c5faa82a1f102f841f29d0ad62ced54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.135888 kubelet[4005]: E0120 06:48:19.135849 4005 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04138c144996d2318feba34f83bb9df22c5faa82a1f102f841f29d0ad62ced54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.136261 kubelet[4005]: E0120 06:48:19.135911 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04138c144996d2318feba34f83bb9df22c5faa82a1f102f841f29d0ad62ced54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c68b4674b-2gdb5" Jan 20 06:48:19.136261 kubelet[4005]: E0120 06:48:19.135929 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04138c144996d2318feba34f83bb9df22c5faa82a1f102f841f29d0ad62ced54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c68b4674b-2gdb5" Jan 20 06:48:19.136261 kubelet[4005]: E0120 06:48:19.135968 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c68b4674b-2gdb5_calico-system(580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c68b4674b-2gdb5_calico-system(580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04138c144996d2318feba34f83bb9df22c5faa82a1f102f841f29d0ad62ced54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c68b4674b-2gdb5" podUID="580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666" Jan 20 06:48:19.375744 containerd[2553]: time="2026-01-20T06:48:19.375692171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zr2j,Uid:02cc045c-02cb-4e4c-b380-c45b5c3edaed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa321015adba5f5f05a6013f020a9d79f21f8b3809749481af42a28bbc516d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.375902 kubelet[4005]: E0120 06:48:19.375860 4005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa321015adba5f5f05a6013f020a9d79f21f8b3809749481af42a28bbc516d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.375947 kubelet[4005]: E0120 06:48:19.375922 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa321015adba5f5f05a6013f020a9d79f21f8b3809749481af42a28bbc516d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4zr2j" Jan 20 06:48:19.375947 kubelet[4005]: E0120 06:48:19.375940 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa321015adba5f5f05a6013f020a9d79f21f8b3809749481af42a28bbc516d2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4zr2j" Jan 20 06:48:19.375999 kubelet[4005]: E0120 06:48:19.375971 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4zr2j_kube-system(02cc045c-02cb-4e4c-b380-c45b5c3edaed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4zr2j_kube-system(02cc045c-02cb-4e4c-b380-c45b5c3edaed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa321015adba5f5f05a6013f020a9d79f21f8b3809749481af42a28bbc516d2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4zr2j" podUID="02cc045c-02cb-4e4c-b380-c45b5c3edaed" Jan 20 06:48:19.437180 containerd[2553]: time="2026-01-20T06:48:19.437119758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tllm,Uid:7274afc5-8df6-4ee4-b52e-bf6155c0f0e9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2110a25bd0d17adc303296a474ae0fe347a3c51e3ccd283300bc6f4f23d41a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.437606 kubelet[4005]: E0120 06:48:19.437265 4005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2110a25bd0d17adc303296a474ae0fe347a3c51e3ccd283300bc6f4f23d41a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.437606 kubelet[4005]: E0120 06:48:19.437311 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2110a25bd0d17adc303296a474ae0fe347a3c51e3ccd283300bc6f4f23d41a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tllm" Jan 20 06:48:19.437606 kubelet[4005]: E0120 06:48:19.437331 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2110a25bd0d17adc303296a474ae0fe347a3c51e3ccd283300bc6f4f23d41a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2tllm" Jan 20 06:48:19.437727 kubelet[4005]: E0120 06:48:19.437363 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-2tllm_kube-system(7274afc5-8df6-4ee4-b52e-bf6155c0f0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2tllm_kube-system(7274afc5-8df6-4ee4-b52e-bf6155c0f0e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2110a25bd0d17adc303296a474ae0fe347a3c51e3ccd283300bc6f4f23d41a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2tllm" podUID="7274afc5-8df6-4ee4-b52e-bf6155c0f0e9" Jan 20 06:48:19.484942 containerd[2553]: time="2026-01-20T06:48:19.484902831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c4kj6,Uid:521ce380-6f9e-4050-b213-569fcc069aed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad40814fbd5a40cc91333241d7380162cda4ba8256102eda587ddb83fa45805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.485160 kubelet[4005]: E0120 06:48:19.485053 4005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad40814fbd5a40cc91333241d7380162cda4ba8256102eda587ddb83fa45805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.485160 kubelet[4005]: E0120 06:48:19.485082 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad40814fbd5a40cc91333241d7380162cda4ba8256102eda587ddb83fa45805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c4kj6" Jan 20 06:48:19.485160 kubelet[4005]: E0120 06:48:19.485110 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ad40814fbd5a40cc91333241d7380162cda4ba8256102eda587ddb83fa45805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c4kj6" Jan 20 06:48:19.485291 kubelet[4005]: E0120 06:48:19.485145 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-c4kj6_calico-system(521ce380-6f9e-4050-b213-569fcc069aed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-c4kj6_calico-system(521ce380-6f9e-4050-b213-569fcc069aed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ad40814fbd5a40cc91333241d7380162cda4ba8256102eda587ddb83fa45805\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:48:19.488457 systemd[1]: 
run-netns-cni\x2df30cd685\x2dd816\x2d63e9\x2dfcf0\x2d3f2bb7034458.mount: Deactivated successfully. Jan 20 06:48:19.488540 systemd[1]: run-netns-cni\x2d101f2eda\x2dc955\x2da5fa\x2d0598\x2d55ff42df3ade.mount: Deactivated successfully. Jan 20 06:48:19.488581 systemd[1]: run-netns-cni\x2d5e8b6d7b\x2d4ae5\x2d3316\x2d9230\x2d7be73409ca7f.mount: Deactivated successfully. Jan 20 06:48:19.488622 systemd[1]: run-netns-cni\x2d2e466db1\x2d621d\x2d644c\x2db517\x2d4fae9ff1394f.mount: Deactivated successfully. Jan 20 06:48:19.531158 containerd[2553]: time="2026-01-20T06:48:19.531101547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-9bp6l,Uid:2309f609-f83d-4aea-8896-a25cb505ea38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f408ee66de0112b65afba685854b8a6019477421a01cf68c6a4001040b1fe40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.531412 kubelet[4005]: E0120 06:48:19.531244 4005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f408ee66de0112b65afba685854b8a6019477421a01cf68c6a4001040b1fe40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.531412 kubelet[4005]: E0120 06:48:19.531276 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f408ee66de0112b65afba685854b8a6019477421a01cf68c6a4001040b1fe40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" Jan 20 06:48:19.531412 kubelet[4005]: E0120 06:48:19.531293 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f408ee66de0112b65afba685854b8a6019477421a01cf68c6a4001040b1fe40a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" Jan 20 06:48:19.531499 kubelet[4005]: E0120 06:48:19.531326 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64f54f655c-9bp6l_calico-apiserver(2309f609-f83d-4aea-8896-a25cb505ea38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64f54f655c-9bp6l_calico-apiserver(2309f609-f83d-4aea-8896-a25cb505ea38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f408ee66de0112b65afba685854b8a6019477421a01cf68c6a4001040b1fe40a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:48:19.627415 containerd[2553]: time="2026-01-20T06:48:19.627329666Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-569b956df8-vdchn,Uid:6a07f6bf-8507-4691-9e22-698d9549bb6f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ea16f309d0156cf04c3526da23509d0233d4767b419b5816a922867392b2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.627519 kubelet[4005]: E0120 06:48:19.627494 4005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ea16f309d0156cf04c3526da23509d0233d4767b419b5816a922867392b2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.627562 kubelet[4005]: E0120 06:48:19.627550 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ea16f309d0156cf04c3526da23509d0233d4767b419b5816a922867392b2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" Jan 20 06:48:19.627587 kubelet[4005]: E0120 06:48:19.627570 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ea16f309d0156cf04c3526da23509d0233d4767b419b5816a922867392b2b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" Jan 20 06:48:19.627657 kubelet[4005]: E0120 06:48:19.627623 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-569b956df8-vdchn_calico-system(6a07f6bf-8507-4691-9e22-698d9549bb6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-569b956df8-vdchn_calico-system(6a07f6bf-8507-4691-9e22-698d9549bb6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32ea16f309d0156cf04c3526da23509d0233d4767b419b5816a922867392b2b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:48:19.630119 containerd[2553]: time="2026-01-20T06:48:19.630037688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-d627v,Uid:6205d977-3cd2-45d3-97f2-85111cfa22a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2d2a87b72352e9344683a304abb97f7c738f4c858b889ce91abde2c8aaf3d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.630262 kubelet[4005]: E0120 06:48:19.630202 4005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"d2d2a87b72352e9344683a304abb97f7c738f4c858b889ce91abde2c8aaf3d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.630307 kubelet[4005]: E0120 06:48:19.630276 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2d2a87b72352e9344683a304abb97f7c738f4c858b889ce91abde2c8aaf3d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" Jan 20 06:48:19.630336 kubelet[4005]: E0120 06:48:19.630305 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2d2a87b72352e9344683a304abb97f7c738f4c858b889ce91abde2c8aaf3d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" Jan 20 06:48:19.630359 kubelet[4005]: E0120 06:48:19.630337 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64f54f655c-d627v_calico-apiserver(6205d977-3cd2-45d3-97f2-85111cfa22a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64f54f655c-d627v_calico-apiserver(6205d977-3cd2-45d3-97f2-85111cfa22a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2d2a87b72352e9344683a304abb97f7c738f4c858b889ce91abde2c8aaf3d64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:48:19.674265 containerd[2553]: time="2026-01-20T06:48:19.674195105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r95bt,Uid:feca3a47-a9f0-4272-a08e-b4b137171f9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcd9956a4d58a9dbc5876d598516a37135e42c67abd5d3101ae1c8d486eafc65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.674414 kubelet[4005]: E0120 06:48:19.674367 4005 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcd9956a4d58a9dbc5876d598516a37135e42c67abd5d3101ae1c8d486eafc65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 06:48:19.674458 kubelet[4005]: E0120 06:48:19.674417 4005 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcd9956a4d58a9dbc5876d598516a37135e42c67abd5d3101ae1c8d486eafc65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r95bt" Jan 20 06:48:19.674458 kubelet[4005]: E0120 06:48:19.674434 4005 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcd9956a4d58a9dbc5876d598516a37135e42c67abd5d3101ae1c8d486eafc65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r95bt" Jan 20 06:48:19.674515 kubelet[4005]: E0120 06:48:19.674462 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcd9956a4d58a9dbc5876d598516a37135e42c67abd5d3101ae1c8d486eafc65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:28.311435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006210862.mount: Deactivated successfully. Jan 20 06:48:28.333260 containerd[2553]: time="2026-01-20T06:48:28.333204412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:28.335381 containerd[2553]: time="2026-01-20T06:48:28.335357040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 20 06:48:28.337859 containerd[2553]: time="2026-01-20T06:48:28.337825997Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:28.342270 containerd[2553]: time="2026-01-20T06:48:28.342236081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 06:48:28.342916 containerd[2553]: time="2026-01-20T06:48:28.342887654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.696870092s" Jan 20 06:48:28.342965 containerd[2553]: time="2026-01-20T06:48:28.342922238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 06:48:28.354514 containerd[2553]: time="2026-01-20T06:48:28.354491138Z" level=info msg="CreateContainer within sandbox \"a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 06:48:28.376489 containerd[2553]: time="2026-01-20T06:48:28.376453527Z" level=info msg="Container 37f3062b504f8152758c78c58e5f561ad3ff16a12d1b73e33b3816f36cfa1354: CDI 
devices from CRI Config.CDIDevices: []" Jan 20 06:48:28.379445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457493031.mount: Deactivated successfully. Jan 20 06:48:28.392409 containerd[2553]: time="2026-01-20T06:48:28.392385920Z" level=info msg="CreateContainer within sandbox \"a680c520ba1b39d602810243e3607e094f65502a1318fcee1f9809b8afdb3966\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"37f3062b504f8152758c78c58e5f561ad3ff16a12d1b73e33b3816f36cfa1354\"" Jan 20 06:48:28.392963 containerd[2553]: time="2026-01-20T06:48:28.392802815Z" level=info msg="StartContainer for \"37f3062b504f8152758c78c58e5f561ad3ff16a12d1b73e33b3816f36cfa1354\"" Jan 20 06:48:28.394046 containerd[2553]: time="2026-01-20T06:48:28.394022430Z" level=info msg="connecting to shim 37f3062b504f8152758c78c58e5f561ad3ff16a12d1b73e33b3816f36cfa1354" address="unix:///run/containerd/s/c40b5faeb3d528f8e54351c42642d2d409a9d760ab5d32df2a3babd9ea56383c" protocol=ttrpc version=3 Jan 20 06:48:28.409352 systemd[1]: Started cri-containerd-37f3062b504f8152758c78c58e5f561ad3ff16a12d1b73e33b3816f36cfa1354.scope - libcontainer container 37f3062b504f8152758c78c58e5f561ad3ff16a12d1b73e33b3816f36cfa1354. Jan 20 06:48:28.456000 audit: BPF prog-id=196 op=LOAD Jan 20 06:48:28.458984 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 06:48:28.459029 kernel: audit: type=1334 audit(1768891708.456:601): prog-id=196 op=LOAD Jan 20 06:48:28.456000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.465344 kernel: audit: type=1300 audit(1768891708.456:601): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.470491 kernel: audit: type=1327 audit(1768891708.456:601): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.456000 audit: BPF prog-id=197 op=LOAD Jan 20 06:48:28.473163 kernel: audit: type=1334 audit(1768891708.456:602): prog-id=197 op=LOAD Jan 20 06:48:28.482590 kernel: audit: type=1300 audit(1768891708.456:602): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.456000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.491600 kernel: audit: type=1327 audit(1768891708.456:602): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.491650 kernel: audit: type=1334 audit(1768891708.456:603): prog-id=197 op=UNLOAD Jan 20 06:48:28.456000 audit: BPF prog-id=197 op=UNLOAD Jan 20 06:48:28.456000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.496831 kernel: audit: type=1300 audit(1768891708.456:603): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.503227 kernel: audit: type=1327 audit(1768891708.456:603): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.504143 containerd[2553]: time="2026-01-20T06:48:28.504078224Z" level=info msg="StartContainer for \"37f3062b504f8152758c78c58e5f561ad3ff16a12d1b73e33b3816f36cfa1354\" returns successfully" Jan 20 06:48:28.456000 audit: BPF prog-id=196 op=UNLOAD Jan 20 06:48:28.456000 audit[5002]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.456000 audit: BPF prog-id=198 op=LOAD Jan 20 06:48:28.456000 audit[5002]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4543 pid=5002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:28.456000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337663330363262353034663831353237353863373863353865356635 Jan 20 06:48:28.509224 kernel: audit: type=1334 audit(1768891708.456:604): prog-id=196 op=UNLOAD Jan 20 06:48:28.681231 kubelet[4005]: I0120 06:48:28.681131 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xvzx8" podStartSLOduration=0.875380298 podStartE2EDuration="29.681116638s" podCreationTimestamp="2026-01-20 06:47:59 +0000 UTC" firstStartedPulling="2026-01-20 06:47:59.537889097 +0000 UTC m=+19.067709483" lastFinishedPulling="2026-01-20 06:48:28.343625428 +0000 UTC m=+47.873445823" observedRunningTime="2026-01-20 06:48:28.68002832 +0000 UTC m=+48.209848722" watchObservedRunningTime="2026-01-20 06:48:28.681116638 +0000 UTC m=+48.210937034" Jan 20 06:48:28.815554 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 06:48:28.815614 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 20 06:48:28.944838 kubelet[4005]: I0120 06:48:28.944434 4005 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-backend-key-pair\") pod \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\" (UID: \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\") " Jan 20 06:48:28.946284 kubelet[4005]: I0120 06:48:28.946264 4005 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-ca-bundle\") pod \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\" (UID: \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\") " Jan 20 06:48:28.946897 kubelet[4005]: I0120 06:48:28.946727 4005 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8zp5\" (UniqueName: \"kubernetes.io/projected/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-kube-api-access-n8zp5\") pod \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\" (UID: \"580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666\") " Jan 20 06:48:28.948967 kubelet[4005]: I0120 06:48:28.946684 4005 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666" (UID: "580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 06:48:28.949224 kubelet[4005]: I0120 06:48:28.949108 4005 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666" (UID: "580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 06:48:28.950122 kubelet[4005]: I0120 06:48:28.950100 4005 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-kube-api-access-n8zp5" (OuterVolumeSpecName: "kube-api-access-n8zp5") pod "580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666" (UID: "580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666"). 
InnerVolumeSpecName "kube-api-access-n8zp5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 06:48:29.047514 kubelet[4005]: I0120 06:48:29.047496 4005 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-backend-key-pair\") on node \"ci-4585.0.0-n-7cf3a16d5e\" DevicePath \"\"" Jan 20 06:48:29.047514 kubelet[4005]: I0120 06:48:29.047514 4005 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-whisker-ca-bundle\") on node \"ci-4585.0.0-n-7cf3a16d5e\" DevicePath \"\"" Jan 20 06:48:29.047602 kubelet[4005]: I0120 06:48:29.047522 4005 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n8zp5\" (UniqueName: \"kubernetes.io/projected/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666-kube-api-access-n8zp5\") on node \"ci-4585.0.0-n-7cf3a16d5e\" DevicePath \"\"" Jan 20 06:48:29.312161 systemd[1]: var-lib-kubelet-pods-580ddd7e\x2d5bcf\x2d4d6b\x2db0d4\x2ddddc2d6ea666-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn8zp5.mount: Deactivated successfully. Jan 20 06:48:29.312722 systemd[1]: var-lib-kubelet-pods-580ddd7e\x2d5bcf\x2d4d6b\x2db0d4\x2ddddc2d6ea666-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 06:48:29.669572 systemd[1]: Removed slice kubepods-besteffort-pod580ddd7e_5bcf_4d6b_b0d4_dddc2d6ea666.slice - libcontainer container kubepods-besteffort-pod580ddd7e_5bcf_4d6b_b0d4_dddc2d6ea666.slice. Jan 20 06:48:29.746021 systemd[1]: Created slice kubepods-besteffort-podf5d911e0_cdad_43cd_8151_f2928352d9f0.slice - libcontainer container kubepods-besteffort-podf5d911e0_cdad_43cd_8151_f2928352d9f0.slice. 
Jan 20 06:48:29.852731 kubelet[4005]: I0120 06:48:29.852710 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f5d911e0-cdad-43cd-8151-f2928352d9f0-whisker-backend-key-pair\") pod \"whisker-dbbb7d496-gvwhx\" (UID: \"f5d911e0-cdad-43cd-8151-f2928352d9f0\") " pod="calico-system/whisker-dbbb7d496-gvwhx" Jan 20 06:48:29.852968 kubelet[4005]: I0120 06:48:29.852739 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsgvt\" (UniqueName: \"kubernetes.io/projected/f5d911e0-cdad-43cd-8151-f2928352d9f0-kube-api-access-gsgvt\") pod \"whisker-dbbb7d496-gvwhx\" (UID: \"f5d911e0-cdad-43cd-8151-f2928352d9f0\") " pod="calico-system/whisker-dbbb7d496-gvwhx" Jan 20 06:48:29.852968 kubelet[4005]: I0120 06:48:29.852760 4005 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5d911e0-cdad-43cd-8151-f2928352d9f0-whisker-ca-bundle\") pod \"whisker-dbbb7d496-gvwhx\" (UID: \"f5d911e0-cdad-43cd-8151-f2928352d9f0\") " pod="calico-system/whisker-dbbb7d496-gvwhx" Jan 20 06:48:30.049534 containerd[2553]: time="2026-01-20T06:48:30.049357968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dbbb7d496-gvwhx,Uid:f5d911e0-cdad-43cd-8151-f2928352d9f0,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:30.222339 systemd-networkd[2169]: calid91778ae9b9: Link UP Jan 20 06:48:30.223494 systemd-networkd[2169]: calid91778ae9b9: Gained carrier Jan 20 06:48:30.240655 containerd[2553]: 2026-01-20 06:48:30.102 [INFO][5139] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 06:48:30.240655 containerd[2553]: 2026-01-20 06:48:30.113 [INFO][5139] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0 whisker-dbbb7d496- calico-system f5d911e0-cdad-43cd-8151-f2928352d9f0 921 0 2026-01-20 06:48:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dbbb7d496 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4585.0.0-n-7cf3a16d5e whisker-dbbb7d496-gvwhx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid91778ae9b9 [] [] }} ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-" Jan 20 06:48:30.240655 containerd[2553]: 2026-01-20 06:48:30.113 [INFO][5139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" Jan 20 06:48:30.240655 containerd[2553]: 2026-01-20 06:48:30.144 [INFO][5200] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" HandleID="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.144 [INFO][5200] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" HandleID="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"whisker-dbbb7d496-gvwhx", "timestamp":"2026-01-20 06:48:30.144574273 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.144 [INFO][5200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.144 [INFO][5200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.144 [INFO][5200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.149 [INFO][5200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.152 [INFO][5200] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.155 [INFO][5200] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.156 [INFO][5200] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.240828 containerd[2553]: 2026-01-20 06:48:30.158 [INFO][5200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.241026 containerd[2553]: 2026-01-20 06:48:30.158 [INFO][5200] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.241026 containerd[2553]: 2026-01-20 06:48:30.159 [INFO][5200] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290 Jan 20 06:48:30.241026 containerd[2553]: 2026-01-20 06:48:30.166 [INFO][5200] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.241026 containerd[2553]: 2026-01-20 06:48:30.171 [INFO][5200] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.1/26] block=192.168.58.0/26 handle="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.241026 containerd[2553]: 2026-01-20 06:48:30.174 [INFO][5200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.1/26] handle="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.241026 containerd[2553]: 2026-01-20 06:48:30.174 [INFO][5200] ipam/ipam_plugin.go 
398: Released host-wide IPAM lock. Jan 20 06:48:30.241026 containerd[2553]: 2026-01-20 06:48:30.174 [INFO][5200] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.1/26] IPv6=[] ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" HandleID="k8s-pod-network.293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" Jan 20 06:48:30.241155 containerd[2553]: 2026-01-20 06:48:30.178 [INFO][5139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0", GenerateName:"whisker-dbbb7d496-", Namespace:"calico-system", SelfLink:"", UID:"f5d911e0-cdad-43cd-8151-f2928352d9f0", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dbbb7d496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"whisker-dbbb7d496-gvwhx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid91778ae9b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:30.241155 containerd[2553]: 2026-01-20 06:48:30.179 [INFO][5139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.1/32] ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" Jan 20 06:48:30.243011 containerd[2553]: 2026-01-20 06:48:30.179 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid91778ae9b9 ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" Jan 20 06:48:30.243011 containerd[2553]: 2026-01-20 06:48:30.222 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" Jan 20 06:48:30.243057 containerd[2553]: 2026-01-20 06:48:30.224 [INFO][5139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" 
Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0", GenerateName:"whisker-dbbb7d496-", Namespace:"calico-system", SelfLink:"", UID:"f5d911e0-cdad-43cd-8151-f2928352d9f0", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 48, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dbbb7d496", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290", Pod:"whisker-dbbb7d496-gvwhx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid91778ae9b9", MAC:"6e:ab:f7:70:58:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:30.243123 containerd[2553]: 2026-01-20 06:48:30.237 [INFO][5139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" Namespace="calico-system" Pod="whisker-dbbb7d496-gvwhx" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-whisker--dbbb7d496--gvwhx-eth0" Jan 20 06:48:30.285020 containerd[2553]: time="2026-01-20T06:48:30.284993357Z" level=info msg="connecting to shim 293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290" address="unix:///run/containerd/s/b91a3905646d94fb762c69c69fe48583b368476311a65d3b31b0a9107e82f104" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:30.320386 systemd[1]: Started cri-containerd-293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290.scope - libcontainer container 293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290. 
Jan 20 06:48:30.344000 audit: BPF prog-id=199 op=LOAD Jan 20 06:48:30.344000 audit: BPF prog-id=200 op=LOAD Jan 20 06:48:30.344000 audit[5269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=5257 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333735396238373632343461613333306631633333613338666664 Jan 20 06:48:30.344000 audit: BPF prog-id=200 op=UNLOAD Jan 20 06:48:30.344000 audit[5269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5257 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333735396238373632343461613333306631633333613338666664 Jan 20 06:48:30.344000 audit: BPF prog-id=201 op=LOAD Jan 20 06:48:30.344000 audit[5269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=5257 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333735396238373632343461613333306631633333613338666664 Jan 20 06:48:30.344000 audit: BPF prog-id=202 op=LOAD Jan 20 06:48:30.344000 audit[5269]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=5257 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333735396238373632343461613333306631633333613338666664 Jan 20 06:48:30.344000 audit: BPF prog-id=202 op=UNLOAD Jan 20 06:48:30.344000 audit[5269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5257 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333735396238373632343461613333306631633333613338666664 Jan 20 06:48:30.344000 audit: BPF prog-id=201 op=UNLOAD Jan 20 06:48:30.344000 audit[5269]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5257 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333735396238373632343461613333306631633333613338666664 Jan 20 06:48:30.344000 audit: BPF prog-id=203 op=LOAD Jan 20 06:48:30.344000 audit[5269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=5257 pid=5269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239333735396238373632343461613333306631633333613338666664 Jan 20 06:48:30.399357 containerd[2553]: time="2026-01-20T06:48:30.399335920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dbbb7d496-gvwhx,Uid:f5d911e0-cdad-43cd-8151-f2928352d9f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"293759b876244aa330f1c33a38ffd832720e7a1ae91f02bacf3d524c5ea7a290\"" Jan 20 06:48:30.398000 audit: BPF prog-id=204 op=LOAD Jan 20 06:48:30.398000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe44491290 a2=98 a3=1fffffffffffffff items=0 ppid=5130 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.398000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:48:30.398000 audit: BPF prog-id=204 op=UNLOAD Jan 20 06:48:30.398000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe44491260 a3=0 items=0 ppid=5130 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.398000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:48:30.398000 audit: BPF prog-id=205 op=LOAD Jan 20 06:48:30.398000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe44491170 a2=94 a3=3 items=0 ppid=5130 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.398000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:48:30.398000 audit: BPF prog-id=205 op=UNLOAD Jan 20 06:48:30.398000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe44491170 a2=94 a3=3 items=0 ppid=5130 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.398000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:48:30.398000 audit: BPF prog-id=206 op=LOAD Jan 20 06:48:30.398000 audit[5304]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe444911b0 a2=94 a3=7ffe44491390 items=0 ppid=5130 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.398000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:48:30.398000 audit: BPF prog-id=206 op=UNLOAD Jan 20 06:48:30.398000 audit[5304]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe444911b0 a2=94 a3=7ffe44491390 items=0 ppid=5130 pid=5304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.398000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 06:48:30.399000 audit: BPF prog-id=207 op=LOAD Jan 20 06:48:30.399000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcaf83c600 a2=98 a3=3 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.399000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.400000 audit: BPF prog-id=207 op=UNLOAD Jan 20 06:48:30.400000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcaf83c5d0 a3=0 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.400000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.400000 audit: BPF prog-id=208 op=LOAD Jan 20 06:48:30.400000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcaf83c3f0 a2=94 a3=54428f items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.400000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.400000 audit: BPF prog-id=208 op=UNLOAD Jan 20 06:48:30.400000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcaf83c3f0 a2=94 a3=54428f items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.400000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.400000 audit: BPF prog-id=209 op=LOAD Jan 20 06:48:30.400000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcaf83c420 a2=94 a3=2 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.400000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.400000 audit: BPF prog-id=209 op=UNLOAD Jan 20 06:48:30.400000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcaf83c420 a2=0 a3=2 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.400000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.402596 containerd[2553]: time="2026-01-20T06:48:30.401870910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:48:30.504000 audit: BPF prog-id=210 op=LOAD Jan 20 06:48:30.504000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcaf83c2e0 a2=94 a3=1 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.504000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.504000 audit: BPF prog-id=210 op=UNLOAD Jan 20 06:48:30.504000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcaf83c2e0 a2=94 a3=1 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.504000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.513000 audit: BPF prog-id=211 op=LOAD Jan 20 06:48:30.513000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcaf83c2d0 a2=94 a3=4 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.513000 audit: BPF prog-id=211 op=UNLOAD Jan 20 06:48:30.513000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcaf83c2d0 a2=0 a3=4 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.513000 audit: BPF prog-id=212 op=LOAD Jan 20 06:48:30.513000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcaf83c130 a2=94 a3=5 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.513000 audit: BPF prog-id=212 op=UNLOAD Jan 20 06:48:30.513000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcaf83c130 a2=0 a3=5 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.513000 audit: BPF prog-id=213 op=LOAD Jan 20 06:48:30.513000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcaf83c350 a2=94 a3=6 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.513000 audit: BPF prog-id=213 op=UNLOAD Jan 20 06:48:30.513000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcaf83c350 a2=0 a3=6 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.513000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.514000 audit: BPF prog-id=214 op=LOAD Jan 20 06:48:30.514000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcaf83bb00 a2=94 a3=88 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.514000 audit: BPF prog-id=215 op=LOAD Jan 20 06:48:30.514000 audit[5305]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcaf83b980 a2=94 a3=2 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.514000 audit: BPF prog-id=215 op=UNLOAD Jan 20 06:48:30.514000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcaf83b9b0 a2=0 a3=7ffcaf83bab0 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.514000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.514000 audit: BPF prog-id=214 op=UNLOAD Jan 20 06:48:30.514000 audit[5305]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=194b3d10 a2=0 a3=88fabe02c9771c86 items=0 ppid=5130 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.514000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 06:48:30.528000 audit: BPF prog-id=216 op=LOAD Jan 20 06:48:30.528000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd4ec2680 a2=98 a3=1999999999999999 items=0 ppid=5130 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:48:30.528000 audit: BPF prog-id=216 op=UNLOAD Jan 20 06:48:30.528000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcd4ec2650 a3=0 items=0 ppid=5130 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:48:30.528000 audit: BPF prog-id=217 op=LOAD Jan 20 06:48:30.528000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd4ec2560 a2=94 a3=ffff items=0 ppid=5130 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:48:30.528000 audit: BPF prog-id=217 op=UNLOAD Jan 20 06:48:30.528000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd4ec2560 a2=94 a3=ffff items=0 ppid=5130 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:48:30.528000 audit: BPF prog-id=218 op=LOAD Jan 20 06:48:30.528000 audit[5308]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcd4ec25a0 a2=94 a3=7ffcd4ec2780 items=0 ppid=5130 pid=5308 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:48:30.528000 audit: BPF prog-id=218 op=UNLOAD Jan 20 06:48:30.528000 audit[5308]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcd4ec25a0 a2=94 a3=7ffcd4ec2780 items=0 ppid=5130 pid=5308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 06:48:30.550032 kubelet[4005]: I0120 06:48:30.549995 4005 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666" path="/var/lib/kubelet/pods/580ddd7e-5bcf-4d6b-b0d4-dddc2d6ea666/volumes" Jan 20 06:48:30.550787 containerd[2553]: time="2026-01-20T06:48:30.550665150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-9bp6l,Uid:2309f609-f83d-4aea-8896-a25cb505ea38,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:48:30.641920 systemd-networkd[2169]: vxlan.calico: Link UP Jan 20 06:48:30.641925 systemd-networkd[2169]: vxlan.calico: Gained carrier Jan 20 06:48:30.651194 containerd[2553]: time="2026-01-20T06:48:30.651172112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:30.654008 containerd[2553]: time="2026-01-20T06:48:30.653971590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:48:30.654156 containerd[2553]: time="2026-01-20T06:48:30.654080187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:30.657472 kubelet[4005]: E0120 06:48:30.655843 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:48:30.657472 kubelet[4005]: E0120 06:48:30.655878 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:48:30.657544 kubelet[4005]: E0120 06:48:30.655990 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:eaf0ffeebada4c75b92faa87b95bad61,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:30.659067 containerd[2553]: time="2026-01-20T06:48:30.659047118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:48:30.672000 audit: BPF prog-id=219 op=LOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefec83b60 a2=98 a3=0 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=219 op=UNLOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffefec83b30 a3=0 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=220 op=LOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefec83970 a2=94 a3=54428f items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=220 op=UNLOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffefec83970 a2=94 a3=54428f items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=221 op=LOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefec839a0 a2=94 a3=2 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=221 op=UNLOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffefec839a0 a2=0 a3=2 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=222 op=LOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefec83750 a2=94 a3=4 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=222 op=UNLOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefec83750 a2=94 a3=4 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 
06:48:30.672000 audit: BPF prog-id=223 op=LOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefec83850 a2=94 a3=7ffefec839d0 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.672000 audit: BPF prog-id=223 op=UNLOAD Jan 20 06:48:30.672000 audit[5353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefec83850 a2=0 a3=7ffefec839d0 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.672000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.675000 audit: BPF prog-id=224 op=LOAD Jan 20 06:48:30.675000 audit[5353]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefec82f80 a2=94 a3=2 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.675000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.675000 audit: BPF prog-id=224 op=UNLOAD Jan 20 06:48:30.675000 audit[5353]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefec82f80 a2=0 a3=2 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.675000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.675000 audit: BPF prog-id=225 op=LOAD Jan 20 06:48:30.675000 audit[5353]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefec83080 a2=94 a3=30 items=0 ppid=5130 pid=5353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.675000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 06:48:30.685000 audit: BPF prog-id=226 op=LOAD Jan 20 06:48:30.685000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff346f8280 a2=98 a3=0 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.685000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.687000 audit: BPF prog-id=226 op=UNLOAD Jan 20 06:48:30.687000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff346f8250 a3=0 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.687000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.687000 audit: BPF prog-id=227 op=LOAD Jan 20 06:48:30.687000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff346f8070 a2=94 a3=54428f items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.687000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.687000 audit: BPF prog-id=227 op=UNLOAD Jan 20 06:48:30.687000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff346f8070 a2=94 a3=54428f items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.687000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.687000 audit: BPF prog-id=228 op=LOAD Jan 20 06:48:30.687000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff346f80a0 a2=94 a3=2 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.687000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.687000 audit: BPF prog-id=228 op=UNLOAD Jan 20 06:48:30.687000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff346f80a0 a2=0 a3=2 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.687000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.718422 systemd-networkd[2169]: calia33f373d142: Link UP Jan 20 06:48:30.721110 systemd-networkd[2169]: calia33f373d142: Gained carrier Jan 20 06:48:30.737410 
containerd[2553]: 2026-01-20 06:48:30.623 [INFO][5320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0 calico-apiserver-64f54f655c- calico-apiserver 2309f609-f83d-4aea-8896-a25cb505ea38 836 0 2026-01-20 06:47:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64f54f655c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4585.0.0-n-7cf3a16d5e calico-apiserver-64f54f655c-9bp6l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia33f373d142 [] [] }} ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-" Jan 20 06:48:30.737410 containerd[2553]: 2026-01-20 06:48:30.623 [INFO][5320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" Jan 20 06:48:30.737410 containerd[2553]: 2026-01-20 06:48:30.665 [INFO][5335] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" HandleID="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.665 [INFO][5335] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" HandleID="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"calico-apiserver-64f54f655c-9bp6l", "timestamp":"2026-01-20 06:48:30.665608625 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.666 [INFO][5335] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.666 [INFO][5335] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.666 [INFO][5335] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.678 [INFO][5335] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.688 [INFO][5335] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.695 [INFO][5335] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.697 [INFO][5335] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737622 containerd[2553]: 2026-01-20 06:48:30.699 [INFO][5335] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737801 containerd[2553]: 2026-01-20 06:48:30.700 [INFO][5335] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737801 containerd[2553]: 2026-01-20 06:48:30.701 [INFO][5335] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf Jan 20 06:48:30.737801 containerd[2553]: 2026-01-20 06:48:30.706 [INFO][5335] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737801 containerd[2553]: 2026-01-20 06:48:30.713 [INFO][5335] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.2/26] block=192.168.58.0/26 handle="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737801 containerd[2553]: 2026-01-20 06:48:30.713 [INFO][5335] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.2/26] handle="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:30.737801 containerd[2553]: 2026-01-20 06:48:30.713 [INFO][5335] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 06:48:30.737801 containerd[2553]: 2026-01-20 06:48:30.713 [INFO][5335] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.2/26] IPv6=[] ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" HandleID="k8s-pod-network.af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" Jan 20 06:48:30.737930 containerd[2553]: 2026-01-20 06:48:30.715 [INFO][5320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0", GenerateName:"calico-apiserver-64f54f655c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2309f609-f83d-4aea-8896-a25cb505ea38", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64f54f655c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"calico-apiserver-64f54f655c-9bp6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia33f373d142", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:30.737987 containerd[2553]: 2026-01-20 06:48:30.715 [INFO][5320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.2/32] ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" Jan 20 06:48:30.737987 containerd[2553]: 2026-01-20 06:48:30.715 [INFO][5320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia33f373d142 ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" Jan 20 06:48:30.737987 containerd[2553]: 2026-01-20 06:48:30.720 [INFO][5320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" Jan 20 06:48:30.738044 containerd[2553]: 2026-01-20 06:48:30.721 [INFO][5320] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0", GenerateName:"calico-apiserver-64f54f655c-", Namespace:"calico-apiserver", SelfLink:"", UID:"2309f609-f83d-4aea-8896-a25cb505ea38", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64f54f655c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf", Pod:"calico-apiserver-64f54f655c-9bp6l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia33f373d142", MAC:"02:d6:b7:e3:bc:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:30.738101 containerd[2553]: 2026-01-20 06:48:30.732 [INFO][5320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-9bp6l" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--9bp6l-eth0" Jan 20 06:48:30.778065 containerd[2553]: time="2026-01-20T06:48:30.777334742Z" level=info msg="connecting to shim af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf" address="unix:///run/containerd/s/1a62abd676f60e78b293b88910a34db9bd243bf25f5769ac71a13eca2b42ae5e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:30.802349 systemd[1]: Started cri-containerd-af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf.scope - libcontainer container af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf. 
Jan 20 06:48:30.810000 audit: BPF prog-id=229 op=LOAD Jan 20 06:48:30.811000 audit: BPF prog-id=230 op=LOAD Jan 20 06:48:30.811000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5374 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393138333834303731616565656634366632396663633830396635 Jan 20 06:48:30.811000 audit: BPF prog-id=230 op=UNLOAD Jan 20 06:48:30.811000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5374 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393138333834303731616565656634366632396663633830396635 Jan 20 06:48:30.811000 audit: BPF prog-id=231 op=LOAD Jan 20 06:48:30.811000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5374 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393138333834303731616565656634366632396663633830396635 Jan 20 06:48:30.811000 audit: BPF prog-id=232 op=LOAD Jan 20 06:48:30.811000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5374 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393138333834303731616565656634366632396663633830396635 Jan 20 06:48:30.811000 audit: BPF prog-id=232 op=UNLOAD Jan 20 06:48:30.811000 audit[5385]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5374 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393138333834303731616565656634366632396663633830396635 Jan 20 06:48:30.811000 audit: BPF prog-id=231 op=UNLOAD Jan 20 06:48:30.811000 audit[5385]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5374 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393138333834303731616565656634366632396663633830396635 Jan 20 06:48:30.811000 audit: BPF prog-id=233 op=LOAD Jan 20 06:48:30.811000 audit[5385]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5374 pid=5385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.811000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166393138333834303731616565656634366632396663633830396635 Jan 20 06:48:30.845765 containerd[2553]: time="2026-01-20T06:48:30.845741737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-9bp6l,Uid:2309f609-f83d-4aea-8896-a25cb505ea38,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af918384071aeeef46f29fcc809f5a83a90d4041ae9bff154069816eeff286cf\"" Jan 20 06:48:30.863000 audit: BPF prog-id=234 op=LOAD Jan 20 06:48:30.863000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff346f7f60 a2=94 a3=1 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.863000 audit: BPF prog-id=234 op=UNLOAD Jan 20 06:48:30.863000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff346f7f60 a2=94 a3=1 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.863000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.871000 audit: BPF prog-id=235 op=LOAD Jan 20 06:48:30.871000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff346f7f50 a2=94 a3=4 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.871000 audit: BPF prog-id=235 op=UNLOAD Jan 20 06:48:30.871000 audit[5356]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=5 a1=7fff346f7f50 a2=0 a3=4 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.871000 audit: BPF prog-id=236 op=LOAD Jan 20 06:48:30.871000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff346f7db0 a2=94 a3=5 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.871000 audit: BPF prog-id=236 op=UNLOAD Jan 20 06:48:30.871000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff346f7db0 a2=0 a3=5 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.871000 audit: BPF prog-id=237 op=LOAD Jan 20 06:48:30.871000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff346f7fd0 a2=94 a3=6 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.872000 audit: BPF prog-id=237 op=UNLOAD Jan 20 06:48:30.872000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff346f7fd0 a2=0 a3=6 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.872000 audit: BPF prog-id=238 op=LOAD Jan 20 06:48:30.872000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff346f7780 a2=94 a3=88 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.872000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.872000 audit: BPF prog-id=239 op=LOAD Jan 20 06:48:30.872000 audit[5356]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff346f7600 a2=94 a3=2 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.872000 audit: BPF prog-id=239 op=UNLOAD Jan 20 06:48:30.872000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff346f7630 a2=0 a3=7fff346f7730 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.872000 audit: BPF prog-id=238 op=UNLOAD Jan 20 06:48:30.872000 audit[5356]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1aa94d10 a2=0 a3=6311bb5dbf0d7e17 items=0 ppid=5130 pid=5356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 06:48:30.876000 audit: BPF prog-id=225 op=UNLOAD Jan 20 06:48:30.876000 audit[5130]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000caf480 a2=0 a3=0 items=0 ppid=5118 pid=5130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.876000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 06:48:30.901479 containerd[2553]: time="2026-01-20T06:48:30.901178498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:30.903911 containerd[2553]: time="2026-01-20T06:48:30.903884937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:48:30.904007 containerd[2553]: time="2026-01-20T06:48:30.903935238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:30.904176 kubelet[4005]: E0120 06:48:30.904017 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:48:30.904176 kubelet[4005]: E0120 06:48:30.904075 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:48:30.904426 containerd[2553]: time="2026-01-20T06:48:30.904295270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:48:30.904669 kubelet[4005]: E0120 06:48:30.904466 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:30.905689 kubelet[4005]: E0120 06:48:30.905664 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:48:30.975000 audit[5432]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=5432 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:30.975000 audit[5432]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc943a6a10 a2=0 a3=7ffc943a69fc items=0 ppid=5130 pid=5432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.975000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:30.977000 audit[5435]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=5435 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:30.977000 audit[5435]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd868c2d20 a2=0 a3=7ffd868c2d0c items=0 ppid=5130 pid=5435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.977000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:30.983000 audit[5430]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=5430 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:30.983000 audit[5430]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffde9740c40 a2=0 a3=7ffde9740c2c items=0 ppid=5130 pid=5430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.983000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:30.999000 audit[5436]: NETFILTER_CFG table=filter:125 family=2 entries=94 op=nft_register_chain pid=5436 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:30.999000 audit[5436]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffc6f6ff920 a2=0 a3=7ffc6f6ff90c items=0 ppid=5130 pid=5436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:30.999000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:31.026000 audit[5446]: NETFILTER_CFG table=filter:126 family=2 entries=50 op=nft_register_chain pid=5446 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:31.026000 audit[5446]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffebb8512a0 a2=0 a3=7ffebb85128c items=0 ppid=5130 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 06:48:31.026000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:31.150367 containerd[2553]: time="2026-01-20T06:48:31.150330344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:31.153385 containerd[2553]: time="2026-01-20T06:48:31.153326488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:48:31.153432 containerd[2553]: time="2026-01-20T06:48:31.153387016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:31.153805 kubelet[4005]: E0120 06:48:31.153732 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:31.154184 kubelet[4005]: E0120 06:48:31.153842 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:31.154184 kubelet[4005]: E0120 06:48:31.153963 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xdd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-9bp6l_calico-apiserver(2309f609-f83d-4aea-8896-a25cb505ea38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:31.155205 kubelet[4005]: E0120 06:48:31.155180 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:48:31.548203 containerd[2553]: time="2026-01-20T06:48:31.548124435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c4kj6,Uid:521ce380-6f9e-4050-b213-569fcc069aed,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:31.548419 containerd[2553]: time="2026-01-20T06:48:31.548124483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-569b956df8-vdchn,Uid:6a07f6bf-8507-4691-9e22-698d9549bb6f,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:31.652886 systemd-networkd[2169]: cali7bcc08c8eda: Link UP Jan 20 06:48:31.653747 systemd-networkd[2169]: cali7bcc08c8eda: Gained carrier Jan 20 06:48:31.665831 containerd[2553]: 2026-01-20 06:48:31.601 [INFO][5449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0 goldmane-666569f655- calico-system 521ce380-6f9e-4050-b213-569fcc069aed 835 0 2026-01-20 06:47:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4585.0.0-n-7cf3a16d5e goldmane-666569f655-c4kj6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7bcc08c8eda [] [] }} ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-" Jan 20 06:48:31.665831 containerd[2553]: 2026-01-20 06:48:31.602 [INFO][5449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" 
WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" Jan 20 06:48:31.665831 containerd[2553]: 2026-01-20 06:48:31.625 [INFO][5475] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" HandleID="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.626 [INFO][5475] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" HandleID="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"goldmane-666569f655-c4kj6", "timestamp":"2026-01-20 06:48:31.625959518 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.626 [INFO][5475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.626 [INFO][5475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.626 [INFO][5475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.630 [INFO][5475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.633 [INFO][5475] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.635 [INFO][5475] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.636 [INFO][5475] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666567 containerd[2553]: 2026-01-20 06:48:31.637 [INFO][5475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666773 containerd[2553]: 2026-01-20 06:48:31.638 [INFO][5475] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666773 containerd[2553]: 2026-01-20 06:48:31.638 [INFO][5475] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573 Jan 20 06:48:31.666773 containerd[2553]: 2026-01-20 06:48:31.644 [INFO][5475] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666773 containerd[2553]: 
2026-01-20 06:48:31.648 [INFO][5475] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.3/26] block=192.168.58.0/26 handle="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666773 containerd[2553]: 2026-01-20 06:48:31.648 [INFO][5475] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.3/26] handle="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.666773 containerd[2553]: 2026-01-20 06:48:31.648 [INFO][5475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:48:31.666773 containerd[2553]: 2026-01-20 06:48:31.648 [INFO][5475] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.3/26] IPv6=[] ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" HandleID="k8s-pod-network.9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" Jan 20 06:48:31.666911 containerd[2553]: 2026-01-20 06:48:31.649 [INFO][5449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"521ce380-6f9e-4050-b213-569fcc069aed", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"goldmane-666569f655-c4kj6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bcc08c8eda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:31.666911 containerd[2553]: 2026-01-20 06:48:31.649 [INFO][5449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.3/32] ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" Jan 20 06:48:31.667076 containerd[2553]: 2026-01-20 06:48:31.649 [INFO][5449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bcc08c8eda ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" 
WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" Jan 20 06:48:31.667076 containerd[2553]: 2026-01-20 06:48:31.654 [INFO][5449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" Jan 20 06:48:31.667124 containerd[2553]: 2026-01-20 06:48:31.654 [INFO][5449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"521ce380-6f9e-4050-b213-569fcc069aed", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573", Pod:"goldmane-666569f655-c4kj6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7bcc08c8eda", MAC:"be:68:ac:14:1d:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:31.667182 containerd[2553]: 2026-01-20 06:48:31.664 [INFO][5449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" Namespace="calico-system" Pod="goldmane-666569f655-c4kj6" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-goldmane--666569f655--c4kj6-eth0" Jan 20 06:48:31.671287 kubelet[4005]: E0120 06:48:31.671249 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:48:31.671811 kubelet[4005]: E0120 06:48:31.671778 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:48:31.679000 audit[5497]: NETFILTER_CFG table=filter:127 family=2 entries=48 op=nft_register_chain pid=5497 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:31.679000 audit[5497]: SYSCALL arch=c000003e syscall=46 success=yes exit=26368 a0=3 a1=7fff464d9b30 a2=0 a3=7fff464d9b1c items=0 ppid=5130 pid=5497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.679000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:31.707000 audit[5499]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:31.707000 audit[5499]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2b0603a0 a2=0 a3=7ffc2b06038c items=0 ppid=4134 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.707000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:31.715000 audit[5499]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=5499 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:31.715000 audit[5499]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc2b0603a0 a2=0 a3=0 items=0 ppid=4134 pid=5499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:31.725000 audit[5501]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=5501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:31.725000 audit[5501]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffed34c2360 a2=0 a3=7ffed34c234c items=0 ppid=4134 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:31.729000 audit[5501]: NETFILTER_CFG table=nat:131 family=2 entries=14 
op=nft_register_rule pid=5501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:31.729000 audit[5501]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffed34c2360 a2=0 a3=0 items=0 ppid=4134 pid=5501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:31.746197 containerd[2553]: time="2026-01-20T06:48:31.744988620Z" level=info msg="connecting to shim 9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573" address="unix:///run/containerd/s/3b2573358acab4a21739c4f94c264d4ebcdc72809241b5b1a97a4440f0e68e45" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:31.768403 systemd-networkd[2169]: calidff034db9ab: Link UP Jan 20 06:48:31.768985 systemd-networkd[2169]: calidff034db9ab: Gained carrier Jan 20 06:48:31.769376 systemd[1]: Started cri-containerd-9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573.scope - libcontainer container 9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573. Jan 20 06:48:31.781000 audit: BPF prog-id=240 op=LOAD Jan 20 06:48:31.782000 audit: BPF prog-id=241 op=LOAD Jan 20 06:48:31.782000 audit[5521]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5511 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346631626134353837613339383338623131313631336266323933 Jan 20 06:48:31.783000 audit: BPF prog-id=241 op=UNLOAD Jan 20 06:48:31.783000 audit[5521]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5511 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346631626134353837613339383338623131313631336266323933 Jan 20 06:48:31.783000 audit: BPF prog-id=242 op=LOAD Jan 20 06:48:31.783000 audit[5521]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5511 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346631626134353837613339383338623131313631336266323933 Jan 20 06:48:31.783000 audit: BPF prog-id=243 op=LOAD Jan 20 06:48:31.783000 audit[5521]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 
a3=0 items=0 ppid=5511 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346631626134353837613339383338623131313631336266323933 Jan 20 06:48:31.783000 audit: BPF prog-id=243 op=UNLOAD Jan 20 06:48:31.783000 audit[5521]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5511 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346631626134353837613339383338623131313631336266323933 Jan 20 06:48:31.783000 audit: BPF prog-id=242 op=UNLOAD Jan 20 06:48:31.783000 audit[5521]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5511 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346631626134353837613339383338623131313631336266323933 Jan 20 06:48:31.785687 containerd[2553]: 2026-01-20 06:48:31.605 [INFO][5460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0 calico-kube-controllers-569b956df8- calico-system 6a07f6bf-8507-4691-9e22-698d9549bb6f 838 0 2026-01-20 06:47:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:569b956df8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4585.0.0-n-7cf3a16d5e calico-kube-controllers-569b956df8-vdchn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidff034db9ab [] [] }} ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-" Jan 20 06:48:31.785687 containerd[2553]: 2026-01-20 06:48:31.605 [INFO][5460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" Jan 20 06:48:31.785687 containerd[2553]: 2026-01-20 06:48:31.630 [INFO][5480] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" HandleID="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.630 [INFO][5480] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" HandleID="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad370), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"calico-kube-controllers-569b956df8-vdchn", "timestamp":"2026-01-20 06:48:31.630506555 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.630 [INFO][5480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.648 [INFO][5480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.648 [INFO][5480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.731 [INFO][5480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.734 [INFO][5480] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.740 [INFO][5480] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.742 [INFO][5480] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785801 containerd[2553]: 2026-01-20 06:48:31.745 [INFO][5480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785988 containerd[2553]: 2026-01-20 06:48:31.745 [INFO][5480] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785988 containerd[2553]: 2026-01-20 06:48:31.746 [INFO][5480] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43 Jan 20 06:48:31.785988 containerd[2553]: 2026-01-20 06:48:31.750 [INFO][5480] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785988 containerd[2553]: 2026-01-20 06:48:31.762 [INFO][5480] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.4/26] block=192.168.58.0/26 
handle="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785988 containerd[2553]: 2026-01-20 06:48:31.762 [INFO][5480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.4/26] handle="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:31.785988 containerd[2553]: 2026-01-20 06:48:31.763 [INFO][5480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:48:31.785988 containerd[2553]: 2026-01-20 06:48:31.763 [INFO][5480] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.4/26] IPv6=[] ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" HandleID="k8s-pod-network.a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" Jan 20 06:48:31.786113 containerd[2553]: 2026-01-20 06:48:31.764 [INFO][5460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0", GenerateName:"calico-kube-controllers-569b956df8-", Namespace:"calico-system", SelfLink:"", UID:"6a07f6bf-8507-4691-9e22-698d9549bb6f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"569b956df8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"calico-kube-controllers-569b956df8-vdchn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidff034db9ab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:31.786173 containerd[2553]: 2026-01-20 06:48:31.764 [INFO][5460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.4/32] ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" Jan 20 06:48:31.786173 containerd[2553]: 2026-01-20 06:48:31.764 [INFO][5460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidff034db9ab ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" 
Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" Jan 20 06:48:31.786173 containerd[2553]: 2026-01-20 06:48:31.768 [INFO][5460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" Jan 20 06:48:31.786245 containerd[2553]: 2026-01-20 06:48:31.768 [INFO][5460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0", GenerateName:"calico-kube-controllers-569b956df8-", Namespace:"calico-system", SelfLink:"", UID:"6a07f6bf-8507-4691-9e22-698d9549bb6f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"569b956df8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43", Pod:"calico-kube-controllers-569b956df8-vdchn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidff034db9ab", MAC:"9e:93:1c:a5:61:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:31.786298 containerd[2553]: 2026-01-20 06:48:31.783 [INFO][5460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" Namespace="calico-system" Pod="calico-kube-controllers-569b956df8-vdchn" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--kube--controllers--569b956df8--vdchn-eth0" Jan 20 06:48:31.784000 audit: BPF prog-id=244 op=LOAD Jan 20 06:48:31.784000 audit[5521]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5511 pid=5521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.784000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961346631626134353837613339383338623131313631336266323933 Jan 20 06:48:31.805000 audit[5547]: NETFILTER_CFG table=filter:132 family=2 entries=50 op=nft_register_chain pid=5547 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:31.805000 audit[5547]: SYSCALL arch=c000003e syscall=46 success=yes exit=24804 a0=3 a1=7ffd62c484c0 a2=0 a3=7ffd62c484ac items=0 ppid=5130 pid=5547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.805000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:31.825155 containerd[2553]: time="2026-01-20T06:48:31.825106908Z" level=info msg="connecting to shim a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43" address="unix:///run/containerd/s/b597fde40c7d7b08d56efa039c87bd30310db927cf0c6f85ade5c2e18a412608" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:31.830952 containerd[2553]: time="2026-01-20T06:48:31.830927932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c4kj6,Uid:521ce380-6f9e-4050-b213-569fcc069aed,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a4f1ba4587a39838b111613bf2938c69bc85e5e8f5c752913b725ca3bd2f573\"" Jan 20 06:48:31.832600 containerd[2553]: time="2026-01-20T06:48:31.832494030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:48:31.845358 systemd[1]: Started cri-containerd-a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43.scope - libcontainer container a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43. 
Jan 20 06:48:31.851000 audit: BPF prog-id=245 op=LOAD Jan 20 06:48:31.851000 audit: BPF prog-id=246 op=LOAD Jan 20 06:48:31.851000 audit[5574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5563 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138653362643461366665663739353162363437333033353864386535 Jan 20 06:48:31.851000 audit: BPF prog-id=246 op=UNLOAD Jan 20 06:48:31.851000 audit[5574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5563 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138653362643461366665663739353162363437333033353864386535 Jan 20 06:48:31.851000 audit: BPF prog-id=247 op=LOAD Jan 20 06:48:31.851000 audit[5574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5563 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138653362643461366665663739353162363437333033353864386535 Jan 20 06:48:31.852000 audit: BPF prog-id=248 op=LOAD Jan 20 06:48:31.852000 audit[5574]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5563 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138653362643461366665663739353162363437333033353864386535 Jan 20 06:48:31.852000 audit: BPF prog-id=248 op=UNLOAD Jan 20 06:48:31.852000 audit[5574]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5563 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138653362643461366665663739353162363437333033353864386535 Jan 20 06:48:31.852000 audit: BPF prog-id=247 op=UNLOAD Jan 20 06:48:31.852000 audit[5574]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5563 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138653362643461366665663739353162363437333033353864386535 Jan 20 06:48:31.852000 audit: BPF prog-id=249 op=LOAD Jan 20 06:48:31.852000 audit[5574]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5563 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:31.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138653362643461366665663739353162363437333033353864386535 Jan 20 06:48:31.881384 systemd-networkd[2169]: calid91778ae9b9: Gained IPv6LL Jan 20 06:48:31.883638 containerd[2553]: time="2026-01-20T06:48:31.883618014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-569b956df8-vdchn,Uid:6a07f6bf-8507-4691-9e22-698d9549bb6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a8e3bd4a6fef7951b64730358d8e56e58e758780327d3fe5de5e6724009e7a43\"" Jan 20 06:48:31.945345 systemd-networkd[2169]: vxlan.calico: Gained IPv6LL Jan 20 06:48:32.009322 systemd-networkd[2169]: calia33f373d142: Gained IPv6LL Jan 20 06:48:32.075760 containerd[2553]: time="2026-01-20T06:48:32.075709939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:32.078409 containerd[2553]: time="2026-01-20T06:48:32.078384114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:48:32.078409 containerd[2553]: time="2026-01-20T06:48:32.078426517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:32.078539 kubelet[4005]: E0120 06:48:32.078518 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:48:32.078725 kubelet[4005]: E0120 06:48:32.078554 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:48:32.078911 kubelet[4005]: E0120 06:48:32.078712 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7z5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c4kj6_calico-system(521ce380-6f9e-4050-b213-569fcc069aed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:32.079121 containerd[2553]: time="2026-01-20T06:48:32.079001630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:48:32.080043 kubelet[4005]: E0120 06:48:32.080002 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" 
podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:48:32.322471 containerd[2553]: time="2026-01-20T06:48:32.322450706Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:32.325052 containerd[2553]: time="2026-01-20T06:48:32.325014742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:48:32.325103 containerd[2553]: time="2026-01-20T06:48:32.325079313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:32.325235 kubelet[4005]: E0120 06:48:32.325193 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:48:32.325272 kubelet[4005]: E0120 06:48:32.325239 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:48:32.325453 kubelet[4005]: E0120 06:48:32.325391 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-569b956df8-vdchn_calico-system(6a07f6bf-8507-4691-9e22-698d9549bb6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:32.326612 kubelet[4005]: E0120 06:48:32.326540 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:48:32.549858 containerd[2553]: time="2026-01-20T06:48:32.549724297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zr2j,Uid:02cc045c-02cb-4e4c-b380-c45b5c3edaed,Namespace:kube-system,Attempt:0,}" Jan 20 06:48:32.549858 containerd[2553]: time="2026-01-20T06:48:32.549724374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tllm,Uid:7274afc5-8df6-4ee4-b52e-bf6155c0f0e9,Namespace:kube-system,Attempt:0,}" Jan 20 06:48:32.655098 systemd-networkd[2169]: calia1ea1637bd1: Link UP Jan 20 06:48:32.655716 systemd-networkd[2169]: calia1ea1637bd1: Gained carrier Jan 20 06:48:32.668088 containerd[2553]: 2026-01-20 06:48:32.602 [INFO][5606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0 coredns-668d6bf9bc- kube-system 7274afc5-8df6-4ee4-b52e-bf6155c0f0e9 837 0 2026-01-20 06:47:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4585.0.0-n-7cf3a16d5e coredns-668d6bf9bc-2tllm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia1ea1637bd1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-" Jan 20 06:48:32.668088 containerd[2553]: 2026-01-20 06:48:32.602 [INFO][5606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" 
WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" Jan 20 06:48:32.668088 containerd[2553]: 2026-01-20 06:48:32.624 [INFO][5630] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" HandleID="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.624 [INFO][5630] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" HandleID="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"coredns-668d6bf9bc-2tllm", "timestamp":"2026-01-20 06:48:32.624545969 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.624 [INFO][5630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.624 [INFO][5630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.624 [INFO][5630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.628 [INFO][5630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.631 [INFO][5630] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.633 [INFO][5630] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.634 [INFO][5630] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668246 containerd[2553]: 2026-01-20 06:48:32.636 [INFO][5630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668622 containerd[2553]: 2026-01-20 06:48:32.636 [INFO][5630] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668622 containerd[2553]: 2026-01-20 06:48:32.637 [INFO][5630] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2 Jan 20 06:48:32.668622 containerd[2553]: 2026-01-20 06:48:32.640 [INFO][5630] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668622 containerd[2553]: 2026-01-20 
06:48:32.648 [INFO][5630] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.5/26] block=192.168.58.0/26 handle="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668622 containerd[2553]: 2026-01-20 06:48:32.648 [INFO][5630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.5/26] handle="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.668622 containerd[2553]: 2026-01-20 06:48:32.648 [INFO][5630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:48:32.668622 containerd[2553]: 2026-01-20 06:48:32.648 [INFO][5630] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.5/26] IPv6=[] ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" HandleID="k8s-pod-network.0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" Jan 20 06:48:32.668874 containerd[2553]: 2026-01-20 06:48:32.650 [INFO][5606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7274afc5-8df6-4ee4-b52e-bf6155c0f0e9", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"coredns-668d6bf9bc-2tllm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1ea1637bd1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:32.668874 containerd[2553]: 2026-01-20 06:48:32.650 [INFO][5606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.5/32] ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" Jan 20 
06:48:32.668874 containerd[2553]: 2026-01-20 06:48:32.650 [INFO][5606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1ea1637bd1 ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" Jan 20 06:48:32.668874 containerd[2553]: 2026-01-20 06:48:32.655 [INFO][5606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" Jan 20 06:48:32.668874 containerd[2553]: 2026-01-20 06:48:32.655 [INFO][5606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7274afc5-8df6-4ee4-b52e-bf6155c0f0e9", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2", Pod:"coredns-668d6bf9bc-2tllm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia1ea1637bd1", MAC:"02:32:2a:ff:c4:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:32.668874 containerd[2553]: 2026-01-20 06:48:32.666 [INFO][5606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-2tllm" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--2tllm-eth0" Jan 20 06:48:32.674594 kubelet[4005]: E0120 06:48:32.674575 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:48:32.677247 kubelet[4005]: E0120 06:48:32.677177 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:48:32.677489 kubelet[4005]: E0120 06:48:32.677316 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:48:32.685000 audit[5649]: NETFILTER_CFG table=filter:133 family=2 entries=56 op=nft_register_chain pid=5649 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:32.685000 audit[5649]: SYSCALL arch=c000003e syscall=46 success=yes exit=27764 a0=3 a1=7fffbf0464f0 a2=0 a3=7fffbf0464dc items=0 ppid=5130 pid=5649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.685000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:32.712626 containerd[2553]: time="2026-01-20T06:48:32.712578368Z" level=info msg="connecting to shim 0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2" address="unix:///run/containerd/s/f61fd2c675ee552bf57eea90ca52b2c7e673538483a79376c92c40f8f2e5acd0" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:32.742000 audit[5682]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=5682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:32.742000 audit[5682]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe4dbdb7e0 a2=0 a3=7ffe4dbdb7cc items=0 ppid=4134 pid=5682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:32.748318 systemd[1]: Started cri-containerd-0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2.scope - libcontainer container 0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2. 
Jan 20 06:48:32.747000 audit[5682]: NETFILTER_CFG table=nat:135 family=2 entries=14 op=nft_register_rule pid=5682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:32.747000 audit[5682]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe4dbdb7e0 a2=0 a3=0 items=0 ppid=4134 pid=5682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:32.782000 audit: BPF prog-id=250 op=LOAD Jan 20 06:48:32.782000 audit: BPF prog-id=251 op=LOAD Jan 20 06:48:32.782000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034323861356433666566316633633236643234626131333335313631 Jan 20 06:48:32.783000 audit: BPF prog-id=251 op=UNLOAD Jan 20 06:48:32.783000 audit[5670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034323861356433666566316633633236643234626131333335313631 Jan 20 06:48:32.783000 audit: BPF prog-id=252 op=LOAD Jan 20 06:48:32.783000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034323861356433666566316633633236643234626131333335313631 Jan 20 06:48:32.783000 audit: BPF prog-id=253 op=LOAD Jan 20 06:48:32.783000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034323861356433666566316633633236643234626131333335313631 Jan 20 06:48:32.783000 audit: BPF prog-id=253 op=UNLOAD Jan 20 06:48:32.783000 audit[5670]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034323861356433666566316633633236643234626131333335313631 Jan 20 06:48:32.783000 audit: BPF prog-id=252 op=UNLOAD Jan 20 06:48:32.783000 audit[5670]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034323861356433666566316633633236643234626131333335313631 Jan 20 06:48:32.783000 audit: BPF prog-id=254 op=LOAD Jan 20 06:48:32.783000 audit[5670]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5659 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034323861356433666566316633633236643234626131333335313631 Jan 20 06:48:32.807638 systemd-networkd[2169]: cali892856c7b6c: Link UP Jan 20 06:48:32.808090 systemd-networkd[2169]: cali892856c7b6c: Gained carrier Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.600 [INFO][5602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0 coredns-668d6bf9bc- kube-system 02cc045c-02cb-4e4c-b380-c45b5c3edaed 833 0 2026-01-20 06:47:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4585.0.0-n-7cf3a16d5e coredns-668d6bf9bc-4zr2j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali892856c7b6c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.600 [INFO][5602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.625 [INFO][5628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" HandleID="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.625 [INFO][5628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" HandleID="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5cb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"coredns-668d6bf9bc-4zr2j", "timestamp":"2026-01-20 06:48:32.625485981 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.625 [INFO][5628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.648 [INFO][5628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.648 [INFO][5628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.745 [INFO][5628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.757 [INFO][5628] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.765 [INFO][5628] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.772 [INFO][5628] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.775 [INFO][5628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.775 [INFO][5628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.776 [INFO][5628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834 Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.781 [INFO][5628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.800 [INFO][5628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.6/26] block=192.168.58.0/26 handle="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" host="ci-4585.0.0-n-7cf3a16d5e" 
Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.800 [INFO][5628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.6/26] handle="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.800 [INFO][5628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:48:32.849282 containerd[2553]: 2026-01-20 06:48:32.800 [INFO][5628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.6/26] IPv6=[] ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" HandleID="k8s-pod-network.6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" Jan 20 06:48:32.849770 containerd[2553]: 2026-01-20 06:48:32.802 [INFO][5602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02cc045c-02cb-4e4c-b380-c45b5c3edaed", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"coredns-668d6bf9bc-4zr2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali892856c7b6c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:32.849770 containerd[2553]: 2026-01-20 06:48:32.803 [INFO][5602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.6/32] ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" Jan 20 06:48:32.849770 containerd[2553]: 2026-01-20 06:48:32.803 [INFO][5602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali892856c7b6c ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" Jan 20 06:48:32.849770 containerd[2553]: 2026-01-20 06:48:32.806 [INFO][5602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" Jan 20 06:48:32.849770 containerd[2553]: 2026-01-20 06:48:32.806 [INFO][5602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"02cc045c-02cb-4e4c-b380-c45b5c3edaed", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834", Pod:"coredns-668d6bf9bc-4zr2j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali892856c7b6c", MAC:"26:19:df:7d:55:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:32.849770 containerd[2553]: 2026-01-20 06:48:32.846 [INFO][5602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" Namespace="kube-system" Pod="coredns-668d6bf9bc-4zr2j" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-coredns--668d6bf9bc--4zr2j-eth0" Jan 20 06:48:32.880448 containerd[2553]: time="2026-01-20T06:48:32.880175717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2tllm,Uid:7274afc5-8df6-4ee4-b52e-bf6155c0f0e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2\"" Jan 20 06:48:32.883334 containerd[2553]: time="2026-01-20T06:48:32.883051473Z" level=info msg="CreateContainer within 
sandbox \"0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:48:32.897295 containerd[2553]: time="2026-01-20T06:48:32.897272798Z" level=info msg="connecting to shim 6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834" address="unix:///run/containerd/s/776a6a4947aad0c0d87043bae6b765b96c2f6454cdb8a0c61158f5b14080e23d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:32.903719 containerd[2553]: time="2026-01-20T06:48:32.903697776Z" level=info msg="Container 8915d8958cb0fef1d3dd1c924fb9ecc8c797e835e093b36fe3d66845c2aa4ddb: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:32.911000 audit[5724]: NETFILTER_CFG table=filter:136 family=2 entries=40 op=nft_register_chain pid=5724 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:32.911000 audit[5724]: SYSCALL arch=c000003e syscall=46 success=yes exit=20312 a0=3 a1=7ffe2f11c630 a2=0 a3=7ffe2f11c61c items=0 ppid=5130 pid=5724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.911000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:32.919009 containerd[2553]: time="2026-01-20T06:48:32.918972999Z" level=info msg="CreateContainer within sandbox \"0428a5d3fef1f3c26d24ba1335161d6318a68b896894cabef85436b19e1213c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8915d8958cb0fef1d3dd1c924fb9ecc8c797e835e093b36fe3d66845c2aa4ddb\"" Jan 20 06:48:32.920238 containerd[2553]: time="2026-01-20T06:48:32.920082149Z" level=info msg="StartContainer for \"8915d8958cb0fef1d3dd1c924fb9ecc8c797e835e093b36fe3d66845c2aa4ddb\"" Jan 20 06:48:32.920863 containerd[2553]: time="2026-01-20T06:48:32.920841295Z" level=info msg="connecting to shim 8915d8958cb0fef1d3dd1c924fb9ecc8c797e835e093b36fe3d66845c2aa4ddb" address="unix:///run/containerd/s/f61fd2c675ee552bf57eea90ca52b2c7e673538483a79376c92c40f8f2e5acd0" protocol=ttrpc version=3 Jan 20 06:48:32.938370 systemd[1]: Started cri-containerd-6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834.scope - libcontainer container 6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834. Jan 20 06:48:32.947811 systemd[1]: Started cri-containerd-8915d8958cb0fef1d3dd1c924fb9ecc8c797e835e093b36fe3d66845c2aa4ddb.scope - libcontainer container 8915d8958cb0fef1d3dd1c924fb9ecc8c797e835e093b36fe3d66845c2aa4ddb. 
Jan 20 06:48:32.964000 audit: BPF prog-id=255 op=LOAD Jan 20 06:48:32.965000 audit: BPF prog-id=256 op=LOAD Jan 20 06:48:32.965000 audit[5734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5659 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313564383935386362306665663164336464316339323466623965 Jan 20 06:48:32.965000 audit: BPF prog-id=256 op=UNLOAD Jan 20 06:48:32.965000 audit[5734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313564383935386362306665663164336464316339323466623965 Jan 20 06:48:32.966000 audit: BPF prog-id=257 op=LOAD Jan 20 06:48:32.966000 audit[5734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5659 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313564383935386362306665663164336464316339323466623965 Jan 20 06:48:32.966000 audit: BPF prog-id=258 op=LOAD Jan 20 06:48:32.966000 audit[5734]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5659 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313564383935386362306665663164336464316339323466623965 Jan 20 06:48:32.966000 audit: BPF prog-id=258 op=UNLOAD Jan 20 06:48:32.966000 audit[5734]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313564383935386362306665663164336464316339323466623965 Jan 20 06:48:32.966000 audit: BPF prog-id=257 op=UNLOAD Jan 20 06:48:32.966000 audit[5734]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5659 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313564383935386362306665663164336464316339323466623965 Jan 20 06:48:32.966000 audit: BPF prog-id=259 op=LOAD Jan 20 06:48:32.966000 audit[5734]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5659 pid=5734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839313564383935386362306665663164336464316339323466623965 Jan 20 06:48:32.969000 audit: BPF prog-id=260 op=LOAD Jan 20 06:48:32.970000 audit: BPF prog-id=261 op=LOAD Jan 20 06:48:32.970000 audit[5723]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5714 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635303566393863393035303634386364373565326334623936373535 Jan 20 06:48:32.970000 audit: BPF prog-id=261 op=UNLOAD Jan 20 06:48:32.970000 audit[5723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5714 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635303566393863393035303634386364373565326334623936373535 Jan 20 06:48:32.971000 audit: BPF prog-id=262 op=LOAD Jan 20 06:48:32.971000 audit[5723]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5714 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635303566393863393035303634386364373565326334623936373535 Jan 20 06:48:32.971000 audit: BPF prog-id=263 op=LOAD Jan 20 06:48:32.971000 audit[5723]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5714 pid=5723 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635303566393863393035303634386364373565326334623936373535 Jan 20 06:48:32.971000 audit: BPF prog-id=263 op=UNLOAD Jan 20 06:48:32.971000 audit[5723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5714 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635303566393863393035303634386364373565326334623936373535 Jan 20 06:48:32.971000 audit: BPF prog-id=262 op=UNLOAD Jan 20 06:48:32.971000 audit[5723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5714 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635303566393863393035303634386364373565326334623936373535 Jan 20 06:48:32.971000 audit: BPF prog-id=264 op=LOAD Jan 20 06:48:32.971000 audit[5723]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5714 pid=5723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:32.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3635303566393863393035303634386364373565326334623936373535 Jan 20 06:48:32.990592 containerd[2553]: time="2026-01-20T06:48:32.990573351Z" level=info msg="StartContainer for \"8915d8958cb0fef1d3dd1c924fb9ecc8c797e835e093b36fe3d66845c2aa4ddb\" returns successfully" Jan 20 06:48:33.024335 containerd[2553]: time="2026-01-20T06:48:33.024314491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zr2j,Uid:02cc045c-02cb-4e4c-b380-c45b5c3edaed,Namespace:kube-system,Attempt:0,} returns sandbox id \"6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834\"" Jan 20 06:48:33.029446 containerd[2553]: time="2026-01-20T06:48:33.029408819Z" level=info msg="CreateContainer within sandbox \"6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 06:48:33.049612 containerd[2553]: time="2026-01-20T06:48:33.049335698Z" level=info msg="Container 6ede210b1ee30f5a5814993f173c5ba634aff32688e18a4ef94cb2e196dd8f25: CDI devices from CRI Config.CDIDevices: []" Jan 20 06:48:33.065165 
containerd[2553]: time="2026-01-20T06:48:33.064763674Z" level=info msg="CreateContainer within sandbox \"6505f98c9050648cd75e2c4b9675543774c43632c5b315dd477ba2e0d4491834\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ede210b1ee30f5a5814993f173c5ba634aff32688e18a4ef94cb2e196dd8f25\"" Jan 20 06:48:33.065283 containerd[2553]: time="2026-01-20T06:48:33.065177162Z" level=info msg="StartContainer for \"6ede210b1ee30f5a5814993f173c5ba634aff32688e18a4ef94cb2e196dd8f25\"" Jan 20 06:48:33.066570 containerd[2553]: time="2026-01-20T06:48:33.066299071Z" level=info msg="connecting to shim 6ede210b1ee30f5a5814993f173c5ba634aff32688e18a4ef94cb2e196dd8f25" address="unix:///run/containerd/s/776a6a4947aad0c0d87043bae6b765b96c2f6454cdb8a0c61158f5b14080e23d" protocol=ttrpc version=3 Jan 20 06:48:33.086383 systemd[1]: Started cri-containerd-6ede210b1ee30f5a5814993f173c5ba634aff32688e18a4ef94cb2e196dd8f25.scope - libcontainer container 6ede210b1ee30f5a5814993f173c5ba634aff32688e18a4ef94cb2e196dd8f25. Jan 20 06:48:33.096000 audit: BPF prog-id=265 op=LOAD Jan 20 06:48:33.097000 audit: BPF prog-id=266 op=LOAD Jan 20 06:48:33.097000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=5714 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665646532313062316565333066356135383134393933663137336335 Jan 20 06:48:33.097000 audit: BPF prog-id=266 op=UNLOAD Jan 20 06:48:33.097000 audit[5783]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5714 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665646532313062316565333066356135383134393933663137336335 Jan 20 06:48:33.097000 audit: BPF prog-id=267 op=LOAD Jan 20 06:48:33.097000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=5714 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665646532313062316565333066356135383134393933663137336335 Jan 20 06:48:33.097000 audit: BPF prog-id=268 op=LOAD Jan 20 06:48:33.097000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=5714 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.097000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665646532313062316565333066356135383134393933663137336335 Jan 20 06:48:33.097000 audit: BPF prog-id=268 op=UNLOAD Jan 20 06:48:33.097000 audit[5783]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5714 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665646532313062316565333066356135383134393933663137336335 Jan 20 06:48:33.097000 audit: BPF prog-id=267 op=UNLOAD Jan 20 06:48:33.097000 audit[5783]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5714 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665646532313062316565333066356135383134393933663137336335 Jan 20 06:48:33.097000 audit: BPF prog-id=269 op=LOAD Jan 20 06:48:33.097000 audit[5783]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=5714 pid=5783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665646532313062316565333066356135383134393933663137336335 Jan 20 06:48:33.120521 containerd[2553]: time="2026-01-20T06:48:33.120363564Z" level=info msg="StartContainer for \"6ede210b1ee30f5a5814993f173c5ba634aff32688e18a4ef94cb2e196dd8f25\" returns successfully" Jan 20 06:48:33.547674 containerd[2553]: time="2026-01-20T06:48:33.547602637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-d627v,Uid:6205d977-3cd2-45d3-97f2-85111cfa22a7,Namespace:calico-apiserver,Attempt:0,}" Jan 20 06:48:33.631107 systemd-networkd[2169]: caliecef3da96c4: Link UP Jan 20 06:48:33.631872 systemd-networkd[2169]: caliecef3da96c4: Gained carrier Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.584 [INFO][5816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0 calico-apiserver-64f54f655c- calico-apiserver 6205d977-3cd2-45d3-97f2-85111cfa22a7 839 0 2026-01-20 06:47:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64f54f655c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4585.0.0-n-7cf3a16d5e calico-apiserver-64f54f655c-d627v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliecef3da96c4 [] [] }} ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.585 [INFO][5816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.603 [INFO][5828] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" HandleID="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.603 [INFO][5828] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" HandleID="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"calico-apiserver-64f54f655c-d627v", "timestamp":"2026-01-20 06:48:33.60375308 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.603 [INFO][5828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.603 [INFO][5828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.603 [INFO][5828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.607 [INFO][5828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.609 [INFO][5828] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.612 [INFO][5828] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.613 [INFO][5828] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.614 [INFO][5828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.614 [INFO][5828] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.615 [INFO][5828] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5 Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.620 [INFO][5828] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.627 [INFO][5828] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.7/26] block=192.168.58.0/26 handle="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.627 [INFO][5828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.7/26] handle="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.627 [INFO][5828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 06:48:33.645645 containerd[2553]: 2026-01-20 06:48:33.627 [INFO][5828] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.7/26] IPv6=[] ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" HandleID="k8s-pod-network.c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" Jan 20 06:48:33.646529 containerd[2553]: 2026-01-20 06:48:33.629 [INFO][5816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0", GenerateName:"calico-apiserver-64f54f655c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6205d977-3cd2-45d3-97f2-85111cfa22a7", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64f54f655c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"calico-apiserver-64f54f655c-d627v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecef3da96c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:33.646529 containerd[2553]: 2026-01-20 06:48:33.629 [INFO][5816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.7/32] ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" Jan 20 06:48:33.646529 containerd[2553]: 2026-01-20 06:48:33.629 [INFO][5816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecef3da96c4 ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" Jan 20 06:48:33.646529 containerd[2553]: 2026-01-20 06:48:33.632 [INFO][5816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" Jan 20 06:48:33.646529 containerd[2553]: 2026-01-20 06:48:33.633 [INFO][5816] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0", GenerateName:"calico-apiserver-64f54f655c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6205d977-3cd2-45d3-97f2-85111cfa22a7", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64f54f655c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5", Pod:"calico-apiserver-64f54f655c-d627v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliecef3da96c4", MAC:"7e:29:7b:c0:90:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:33.646529 containerd[2553]: 2026-01-20 06:48:33.643 [INFO][5816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" Namespace="calico-apiserver" Pod="calico-apiserver-64f54f655c-d627v" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-calico--apiserver--64f54f655c--d627v-eth0" Jan 20 06:48:33.655000 audit[5841]: NETFILTER_CFG table=filter:137 family=2 entries=49 op=nft_register_chain pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:33.657540 kernel: kauditd_printk_skb: 412 callbacks suppressed Jan 20 06:48:33.657593 kernel: audit: type=1325 audit(1768891713.655:747): table=filter:137 family=2 entries=49 op=nft_register_chain pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:33.655000 audit[5841]: SYSCALL arch=c000003e syscall=46 success=yes exit=25420 a0=3 a1=7ffdd98e5160 a2=0 a3=7ffdd98e514c items=0 ppid=5130 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.672426 kernel: audit: type=1300 audit(1768891713.655:747): arch=c000003e syscall=46 success=yes exit=25420 a0=3 a1=7ffdd98e5160 a2=0 a3=7ffdd98e514c items=0 ppid=5130 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.673977 
systemd-networkd[2169]: cali7bcc08c8eda: Gained IPv6LL Jan 20 06:48:33.674390 kernel: audit: type=1327 audit(1768891713.655:747): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:33.655000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:33.688182 kubelet[4005]: E0120 06:48:33.688056 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:48:33.689072 kubelet[4005]: E0120 06:48:33.689050 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:48:33.692436 containerd[2553]: time="2026-01-20T06:48:33.692229393Z" level=info msg="connecting to shim c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5" address="unix:///run/containerd/s/f7d9b50fd2554b86ae58827ffb3ad0356951378c8de3f59d7b7694d4aab3a92e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:33.719097 kubelet[4005]: I0120 06:48:33.718869 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4zr2j" podStartSLOduration=48.71885555 podStartE2EDuration="48.71885555s" podCreationTimestamp="2026-01-20 06:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:48:33.702475278 +0000 UTC m=+53.232295674" watchObservedRunningTime="2026-01-20 06:48:33.71885555 +0000 UTC m=+53.248675967" Jan 20 06:48:33.725115 systemd[1]: Started cri-containerd-c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5.scope - libcontainer container c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5. 
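The kubelet ImagePullBackOff entries here, and the containerd "fetch failed after status: 404 Not Found" entry further down, all point at missing ghcr.io/flatcar/calico tags. A rough, illustrative way to reproduce that check outside the kubelet, assuming ghcr.io's standard anonymous registry-token flow for public images (the token endpoint and Accept header are assumptions, not taken from the log):

    import json
    import urllib.error
    import urllib.request

    repo, tag = "flatcar/calico/apiserver", "v3.30.4"
    # Anonymous pull token (assumed standard registry token endpoint on ghcr.io).
    tok = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}", method="HEAD",
        headers={"Authorization": f"Bearer {tok}",
                 "Accept": "application/vnd.oci.image.index.v1+json"})
    try:
        print(urllib.request.urlopen(req).status)  # 200: tag exists
    except urllib.error.HTTPError as err:
        print(err.code)  # 404: tag missing, consistent with the ErrImagePull entries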
Jan 20 06:48:33.731734 kernel: audit: type=1325 audit(1768891713.727:748): table=filter:138 family=2 entries=20 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:33.727000 audit[5875]: NETFILTER_CFG table=filter:138 family=2 entries=20 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:33.727000 audit[5875]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc6b39a8c0 a2=0 a3=7ffc6b39a8ac items=0 ppid=4134 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.737422 systemd-networkd[2169]: calidff034db9ab: Gained IPv6LL Jan 20 06:48:33.739143 kernel: audit: type=1300 audit(1768891713.727:748): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc6b39a8c0 a2=0 a3=7ffc6b39a8ac items=0 ppid=4134 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.727000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:33.744228 kernel: audit: type=1327 audit(1768891713.727:748): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:33.737000 audit[5875]: NETFILTER_CFG table=nat:139 family=2 entries=14 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:33.749229 kernel: audit: type=1325 audit(1768891713.737:749): table=nat:139 family=2 entries=14 op=nft_register_rule pid=5875 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:33.737000 audit[5875]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc6b39a8c0 a2=0 a3=0 items=0 ppid=4134 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.755307 kernel: audit: type=1300 audit(1768891713.737:749): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc6b39a8c0 a2=0 a3=0 items=0 ppid=4134 pid=5875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:33.761263 kernel: audit: type=1327 audit(1768891713.737:749): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:33.758000 audit: BPF prog-id=270 op=LOAD Jan 20 06:48:33.764225 kernel: audit: type=1334 audit(1768891713.758:750): prog-id=270 op=LOAD Jan 20 06:48:33.759000 audit: BPF prog-id=271 op=LOAD Jan 20 06:48:33.759000 audit[5864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5850 pid=5864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.759000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653830626664663730383662363934393837666332636132353438 Jan 20 06:48:33.759000 audit: BPF prog-id=271 op=UNLOAD Jan 20 06:48:33.759000 audit[5864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5850 pid=5864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653830626664663730383662363934393837666332636132353438 Jan 20 06:48:33.759000 audit: BPF prog-id=272 op=LOAD Jan 20 06:48:33.759000 audit[5864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5850 pid=5864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653830626664663730383662363934393837666332636132353438 Jan 20 06:48:33.759000 audit: BPF prog-id=273 op=LOAD Jan 20 06:48:33.759000 audit[5864]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5850 pid=5864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653830626664663730383662363934393837666332636132353438 Jan 20 06:48:33.759000 audit: BPF prog-id=273 op=UNLOAD Jan 20 06:48:33.759000 audit[5864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5850 pid=5864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653830626664663730383662363934393837666332636132353438 Jan 20 06:48:33.759000 audit: BPF prog-id=272 op=UNLOAD Jan 20 06:48:33.759000 audit[5864]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5850 pid=5864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.759000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653830626664663730383662363934393837666332636132353438 Jan 20 06:48:33.759000 audit: BPF prog-id=274 op=LOAD Jan 20 06:48:33.759000 audit[5864]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5850 pid=5864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:33.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335653830626664663730383662363934393837666332636132353438 Jan 20 06:48:33.776931 kubelet[4005]: I0120 06:48:33.776893 4005 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2tllm" podStartSLOduration=48.776881243 podStartE2EDuration="48.776881243s" podCreationTimestamp="2026-01-20 06:47:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 06:48:33.739412274 +0000 UTC m=+53.269232670" watchObservedRunningTime="2026-01-20 06:48:33.776881243 +0000 UTC m=+53.306701642" Jan 20 06:48:33.820614 containerd[2553]: time="2026-01-20T06:48:33.820534112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64f54f655c-d627v,Uid:6205d977-3cd2-45d3-97f2-85111cfa22a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c5e80bfdf7086b694987fc2ca2548683139c510e446e5600189af5399ed50ed5\"" Jan 20 06:48:33.822641 containerd[2553]: time="2026-01-20T06:48:33.822611720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:48:33.993532 systemd-networkd[2169]: calia1ea1637bd1: Gained IPv6LL Jan 20 06:48:33.993861 systemd-networkd[2169]: cali892856c7b6c: Gained IPv6LL Jan 20 06:48:34.065586 containerd[2553]: time="2026-01-20T06:48:34.065564337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:34.068048 containerd[2553]: time="2026-01-20T06:48:34.068021746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:48:34.068101 containerd[2553]: time="2026-01-20T06:48:34.068078459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:34.068214 kubelet[4005]: E0120 06:48:34.068174 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:34.068250 kubelet[4005]: E0120 06:48:34.068221 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:34.068338 kubelet[4005]: E0120 06:48:34.068311 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2w5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-d627v_calico-apiserver(6205d977-3cd2-45d3-97f2-85111cfa22a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:34.069982 kubelet[4005]: E0120 06:48:34.069959 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:48:34.690151 kubelet[4005]: E0120 06:48:34.689974 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:48:34.709000 audit[5892]: NETFILTER_CFG table=filter:140 family=2 entries=17 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:34.709000 audit[5892]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff793d03b0 a2=0 a3=7fff793d039c items=0 ppid=4134 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:34.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:34.732000 audit[5892]: NETFILTER_CFG table=nat:141 family=2 entries=47 op=nft_register_chain pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:48:34.732000 audit[5892]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff793d03b0 a2=0 a3=7fff793d039c items=0 ppid=4134 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:34.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:48:35.017392 systemd-networkd[2169]: caliecef3da96c4: Gained IPv6LL Jan 20 06:48:35.548008 containerd[2553]: time="2026-01-20T06:48:35.547987227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r95bt,Uid:feca3a47-a9f0-4272-a08e-b4b137171f9f,Namespace:calico-system,Attempt:0,}" Jan 20 06:48:35.634022 systemd-networkd[2169]: cali5e601fa8358: Link UP Jan 20 06:48:35.634747 systemd-networkd[2169]: cali5e601fa8358: Gained carrier Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.582 [INFO][5895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0 csi-node-driver- calico-system feca3a47-a9f0-4272-a08e-b4b137171f9f 718 0 2026-01-20 06:47:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4585.0.0-n-7cf3a16d5e csi-node-driver-r95bt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5e601fa8358 [] [] }} ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.582 [INFO][5895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.604 [INFO][5906] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" 
HandleID="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.604 [INFO][5906] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" HandleID="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f270), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4585.0.0-n-7cf3a16d5e", "pod":"csi-node-driver-r95bt", "timestamp":"2026-01-20 06:48:35.604136313 +0000 UTC"}, Hostname:"ci-4585.0.0-n-7cf3a16d5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.604 [INFO][5906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.604 [INFO][5906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.604 [INFO][5906] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4585.0.0-n-7cf3a16d5e' Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.609 [INFO][5906] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.612 [INFO][5906] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.615 [INFO][5906] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.616 [INFO][5906] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.618 [INFO][5906] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.618 [INFO][5906] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.619 [INFO][5906] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.623 [INFO][5906] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.630 [INFO][5906] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.8/26] block=192.168.58.0/26 handle="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.630 [INFO][5906] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.8/26] handle="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" host="ci-4585.0.0-n-7cf3a16d5e" Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.630 [INFO][5906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 06:48:35.650311 containerd[2553]: 2026-01-20 06:48:35.630 [INFO][5906] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.8/26] IPv6=[] ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" HandleID="k8s-pod-network.72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Workload="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" Jan 20 06:48:35.650775 containerd[2553]: 2026-01-20 06:48:35.632 [INFO][5895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"feca3a47-a9f0-4272-a08e-b4b137171f9f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"", Pod:"csi-node-driver-r95bt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e601fa8358", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:35.650775 containerd[2553]: 2026-01-20 06:48:35.632 [INFO][5895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.8/32] ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" Jan 20 06:48:35.650775 containerd[2553]: 2026-01-20 06:48:35.632 [INFO][5895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e601fa8358 ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" Jan 20 06:48:35.650775 containerd[2553]: 2026-01-20 06:48:35.634 [INFO][5895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" 
Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" Jan 20 06:48:35.650775 containerd[2553]: 2026-01-20 06:48:35.635 [INFO][5895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"feca3a47-a9f0-4272-a08e-b4b137171f9f", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 6, 47, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4585.0.0-n-7cf3a16d5e", ContainerID:"72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a", Pod:"csi-node-driver-r95bt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5e601fa8358", MAC:"a6:64:fb:67:7a:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 06:48:35.650775 containerd[2553]: 2026-01-20 06:48:35.648 [INFO][5895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" Namespace="calico-system" Pod="csi-node-driver-r95bt" WorkloadEndpoint="ci--4585.0.0--n--7cf3a16d5e-k8s-csi--node--driver--r95bt-eth0" Jan 20 06:48:35.662000 audit[5920]: NETFILTER_CFG table=filter:142 family=2 entries=52 op=nft_register_chain pid=5920 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 06:48:35.662000 audit[5920]: SYSCALL arch=c000003e syscall=46 success=yes exit=24296 a0=3 a1=7ffc9b48ed60 a2=0 a3=7ffc9b48ed4c items=0 ppid=5130 pid=5920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.662000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 06:48:35.692681 kubelet[4005]: E0120 06:48:35.692507 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:48:35.696436 containerd[2553]: time="2026-01-20T06:48:35.696408876Z" level=info msg="connecting to shim 72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a" address="unix:///run/containerd/s/70bf2322dc858ab2ea24b4d00f89571dbff7f3291f261661e0e96845b2026338" namespace=k8s.io protocol=ttrpc version=3 Jan 20 06:48:35.717350 systemd[1]: Started cri-containerd-72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a.scope - libcontainer container 72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a. Jan 20 06:48:35.722000 audit: BPF prog-id=275 op=LOAD Jan 20 06:48:35.722000 audit: BPF prog-id=276 op=LOAD Jan 20 06:48:35.722000 audit[5942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5929 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643966613738393965643133633061396161613033326165643336 Jan 20 06:48:35.722000 audit: BPF prog-id=276 op=UNLOAD Jan 20 06:48:35.722000 audit[5942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5929 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643966613738393965643133633061396161613033326165643336 Jan 20 06:48:35.722000 audit: BPF prog-id=277 op=LOAD Jan 20 06:48:35.722000 audit[5942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5929 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643966613738393965643133633061396161613033326165643336 Jan 20 06:48:35.722000 audit: BPF prog-id=278 op=LOAD Jan 20 06:48:35.722000 audit[5942]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5929 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643966613738393965643133633061396161613033326165643336 
Jan 20 06:48:35.722000 audit: BPF prog-id=278 op=UNLOAD Jan 20 06:48:35.722000 audit[5942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5929 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643966613738393965643133633061396161613033326165643336 Jan 20 06:48:35.722000 audit: BPF prog-id=277 op=UNLOAD Jan 20 06:48:35.722000 audit[5942]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5929 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643966613738393965643133633061396161613033326165643336 Jan 20 06:48:35.722000 audit: BPF prog-id=279 op=LOAD Jan 20 06:48:35.722000 audit[5942]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5929 pid=5942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:48:35.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643966613738393965643133633061396161613033326165643336 Jan 20 06:48:35.738182 containerd[2553]: time="2026-01-20T06:48:35.738152630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r95bt,Uid:feca3a47-a9f0-4272-a08e-b4b137171f9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"72d9fa7899ed13c0a9aaa032aed36e145d7c13728b2bcb455bc3cf4e256d250a\"" Jan 20 06:48:35.743062 containerd[2553]: time="2026-01-20T06:48:35.743037229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:48:35.989350 containerd[2553]: time="2026-01-20T06:48:35.989280601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:35.991980 containerd[2553]: time="2026-01-20T06:48:35.991896990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:48:35.991980 containerd[2553]: time="2026-01-20T06:48:35.991957992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:35.992082 kubelet[4005]: E0120 06:48:35.992055 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:48:35.992114 kubelet[4005]: 
E0120 06:48:35.992104 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:48:35.992883 kubelet[4005]: E0120 06:48:35.992201 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:35.995236 containerd[2553]: time="2026-01-20T06:48:35.994415250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:48:36.246424 containerd[2553]: time="2026-01-20T06:48:36.246297583Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:36.249958 containerd[2553]: time="2026-01-20T06:48:36.249924798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:48:36.250006 containerd[2553]: time="2026-01-20T06:48:36.249973268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:36.250098 kubelet[4005]: E0120 06:48:36.250058 4005 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:48:36.250128 kubelet[4005]: E0120 06:48:36.250107 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:48:36.250360 kubelet[4005]: E0120 06:48:36.250232 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:36.251427 kubelet[4005]: E0120 06:48:36.251395 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:36.694372 kubelet[4005]: E0120 06:48:36.694295 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:37.449321 systemd-networkd[2169]: cali5e601fa8358: Gained IPv6LL Jan 20 06:48:37.697075 kubelet[4005]: E0120 06:48:37.696944 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:46.548308 containerd[2553]: time="2026-01-20T06:48:46.548204854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:48:46.791847 containerd[2553]: time="2026-01-20T06:48:46.791806343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:46.794533 containerd[2553]: time="2026-01-20T06:48:46.794504265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:48:46.794593 containerd[2553]: time="2026-01-20T06:48:46.794551595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:46.794664 kubelet[4005]: E0120 06:48:46.794630 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:46.794975 kubelet[4005]: E0120 06:48:46.794671 4005 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:46.794975 kubelet[4005]: E0120 06:48:46.794773 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xdd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-9bp6l_calico-apiserver(2309f609-f83d-4aea-8896-a25cb505ea38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:46.795983 kubelet[4005]: E0120 06:48:46.795960 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:48:47.548130 containerd[2553]: time="2026-01-20T06:48:47.547953269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:48:47.800141 containerd[2553]: time="2026-01-20T06:48:47.800061976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:47.802822 containerd[2553]: 
time="2026-01-20T06:48:47.802799206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:48:47.802863 containerd[2553]: time="2026-01-20T06:48:47.802847898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:47.802959 kubelet[4005]: E0120 06:48:47.802928 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:48:47.803234 kubelet[4005]: E0120 06:48:47.802956 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:48:47.803234 kubelet[4005]: E0120 06:48:47.803180 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7z5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c4kj6_calico-system(521ce380-6f9e-4050-b213-569fcc069aed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:47.803481 containerd[2553]: time="2026-01-20T06:48:47.803404090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:48:47.805173 kubelet[4005]: E0120 06:48:47.805119 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:48:48.049163 containerd[2553]: time="2026-01-20T06:48:48.049142399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:48.053598 containerd[2553]: time="2026-01-20T06:48:48.053542737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:48:48.053598 containerd[2553]: time="2026-01-20T06:48:48.053586749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:48.053956 kubelet[4005]: E0120 06:48:48.053930 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:48:48.054005 kubelet[4005]: E0120 06:48:48.053957 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:48:48.054074 kubelet[4005]: E0120 06:48:48.054034 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:eaf0ffeebada4c75b92faa87b95bad61,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:48.055983 containerd[2553]: time="2026-01-20T06:48:48.055961036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:48:48.311710 containerd[2553]: time="2026-01-20T06:48:48.311642923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:48.314177 containerd[2553]: time="2026-01-20T06:48:48.314138539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:48:48.314245 containerd[2553]: time="2026-01-20T06:48:48.314199599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:48.314299 kubelet[4005]: E0120 06:48:48.314280 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:48:48.314332 kubelet[4005]: E0120 06:48:48.314306 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:48:48.314427 kubelet[4005]: E0120 06:48:48.314385 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:48.315567 kubelet[4005]: E0120 06:48:48.315526 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:48:48.547994 containerd[2553]: time="2026-01-20T06:48:48.547852269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:48:48.783779 containerd[2553]: time="2026-01-20T06:48:48.783751868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:48.786542 containerd[2553]: time="2026-01-20T06:48:48.786508929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:48:48.786600 containerd[2553]: time="2026-01-20T06:48:48.786565021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:48.786772 kubelet[4005]: E0120 06:48:48.786696 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:48:48.786823 kubelet[4005]: E0120 06:48:48.786776 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:48:48.786926 kubelet[4005]: E0120 06:48:48.786878 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-569b956df8-vdchn_calico-system(6a07f6bf-8507-4691-9e22-698d9549bb6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:48.788532 kubelet[4005]: E0120 06:48:48.788141 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:48:49.548380 containerd[2553]: time="2026-01-20T06:48:49.548329696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:48:49.831159 containerd[2553]: time="2026-01-20T06:48:49.831041502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:49.833564 containerd[2553]: time="2026-01-20T06:48:49.833535902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:48:49.833621 containerd[2553]: time="2026-01-20T06:48:49.833582764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:49.833764 kubelet[4005]: E0120 06:48:49.833722 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:49.834012 kubelet[4005]: E0120 06:48:49.833770 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:48:49.834012 kubelet[4005]: E0120 06:48:49.833856 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2w5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-d627v_calico-apiserver(6205d977-3cd2-45d3-97f2-85111cfa22a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:49.835183 kubelet[4005]: E0120 06:48:49.835160 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:48:52.548886 containerd[2553]: time="2026-01-20T06:48:52.548763749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:48:52.799166 containerd[2553]: time="2026-01-20T06:48:52.799107099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:52.801445 containerd[2553]: time="2026-01-20T06:48:52.801406203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:48:52.801521 containerd[2553]: time="2026-01-20T06:48:52.801466296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:52.801558 kubelet[4005]: E0120 06:48:52.801534 4005 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:48:52.801735 kubelet[4005]: E0120 06:48:52.801562 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:48:52.801735 kubelet[4005]: E0120 06:48:52.801640 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:52.803774 containerd[2553]: time="2026-01-20T06:48:52.803754610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:48:53.055269 containerd[2553]: time="2026-01-20T06:48:53.055202058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:48:53.058949 containerd[2553]: time="2026-01-20T06:48:53.058887787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:48:53.058949 containerd[2553]: time="2026-01-20T06:48:53.058924846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:48:53.059057 kubelet[4005]: E0120 06:48:53.059009 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:48:53.059057 kubelet[4005]: E0120 06:48:53.059032 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:48:53.059158 kubelet[4005]: E0120 06:48:53.059113 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:48:53.060433 kubelet[4005]: E0120 06:48:53.060386 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:48:58.548491 kubelet[4005]: E0120 06:48:58.548434 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:48:58.550103 kubelet[4005]: E0120 06:48:58.550056 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:49:00.548690 kubelet[4005]: E0120 06:49:00.548330 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:49:01.550264 kubelet[4005]: E0120 06:49:01.548831 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:49:04.550334 kubelet[4005]: E0120 06:49:04.550293 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:49:05.549168 kubelet[4005]: E0120 06:49:05.549091 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:49:10.553482 containerd[2553]: time="2026-01-20T06:49:10.552253887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:49:10.839568 containerd[2553]: time="2026-01-20T06:49:10.839455287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:10.842282 containerd[2553]: time="2026-01-20T06:49:10.842185360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:49:10.842282 containerd[2553]: time="2026-01-20T06:49:10.842259536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:10.842541 kubelet[4005]: E0120 06:49:10.842508 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:49:10.842955 kubelet[4005]: E0120 06:49:10.842806 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:49:10.842955 kubelet[4005]: E0120 06:49:10.842919 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:eaf0ffeebada4c75b92faa87b95bad61,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:10.845154 containerd[2553]: time="2026-01-20T06:49:10.845129542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:49:11.135248 containerd[2553]: time="2026-01-20T06:49:11.135016850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:11.138003 containerd[2553]: time="2026-01-20T06:49:11.137929815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:49:11.138003 containerd[2553]: time="2026-01-20T06:49:11.137971880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:11.138114 kubelet[4005]: E0120 06:49:11.138072 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:49:11.138114 kubelet[4005]: E0120 06:49:11.138107 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:49:11.138254 kubelet[4005]: E0120 06:49:11.138203 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:11.139677 kubelet[4005]: E0120 06:49:11.139638 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:49:11.548557 containerd[2553]: time="2026-01-20T06:49:11.548355514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:49:11.798757 containerd[2553]: time="2026-01-20T06:49:11.798699553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:11.801581 containerd[2553]: time="2026-01-20T06:49:11.801537310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 
06:49:11.801697 containerd[2553]: time="2026-01-20T06:49:11.801541887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:11.802205 kubelet[4005]: E0120 06:49:11.801995 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:49:11.802205 kubelet[4005]: E0120 06:49:11.802028 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:49:11.802205 kubelet[4005]: E0120 06:49:11.802155 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7z5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c4kj6_calico-system(521ce380-6f9e-4050-b213-569fcc069aed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:11.803312 kubelet[4005]: E0120 06:49:11.803264 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:49:15.549479 containerd[2553]: time="2026-01-20T06:49:15.549299817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:49:15.794703 containerd[2553]: time="2026-01-20T06:49:15.794662563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:15.797802 containerd[2553]: time="2026-01-20T06:49:15.797664144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:49:15.798063 containerd[2553]: time="2026-01-20T06:49:15.797777044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:15.798309 kubelet[4005]: E0120 06:49:15.798272 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:49:15.799198 kubelet[4005]: E0120 06:49:15.798320 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:49:15.799274 containerd[2553]: time="2026-01-20T06:49:15.798821746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:49:15.799309 kubelet[4005]: E0120 06:49:15.799270 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xdd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-9bp6l_calico-apiserver(2309f609-f83d-4aea-8896-a25cb505ea38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:15.801284 kubelet[4005]: E0120 06:49:15.800729 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:49:16.051753 containerd[2553]: time="2026-01-20T06:49:16.051508141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:16.054075 containerd[2553]: time="2026-01-20T06:49:16.054049638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:49:16.054163 containerd[2553]: time="2026-01-20T06:49:16.054103647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=0" Jan 20 06:49:16.054243 kubelet[4005]: E0120 06:49:16.054205 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:49:16.054288 kubelet[4005]: E0120 06:49:16.054250 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:49:16.054442 kubelet[4005]: E0120 06:49:16.054411 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-569b956df8-vdchn_calico-system(6a07f6bf-8507-4691-9e22-698d9549bb6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:16.054868 containerd[2553]: time="2026-01-20T06:49:16.054848173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:49:16.056201 kubelet[4005]: E0120 06:49:16.056174 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:49:16.298259 containerd[2553]: time="2026-01-20T06:49:16.298145695Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:16.300833 containerd[2553]: time="2026-01-20T06:49:16.300797434Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:49:16.300940 containerd[2553]: time="2026-01-20T06:49:16.300813273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:16.301172 kubelet[4005]: E0120 06:49:16.301098 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:49:16.301243 kubelet[4005]: E0120 06:49:16.301181 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:49:16.301666 kubelet[4005]: E0120 06:49:16.301331 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2w5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-d627v_calico-apiserver(6205d977-3cd2-45d3-97f2-85111cfa22a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:16.303290 kubelet[4005]: E0120 06:49:16.302766 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:49:17.549217 containerd[2553]: time="2026-01-20T06:49:17.549171107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:49:17.792046 containerd[2553]: time="2026-01-20T06:49:17.791932878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:17.795031 containerd[2553]: time="2026-01-20T06:49:17.794932134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:49:17.795031 containerd[2553]: time="2026-01-20T06:49:17.795008134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:17.796085 kubelet[4005]: E0120 06:49:17.795255 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:49:17.796085 kubelet[4005]: E0120 06:49:17.795300 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:49:17.796085 kubelet[4005]: E0120 06:49:17.795407 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:17.797201 containerd[2553]: time="2026-01-20T06:49:17.797045243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:49:18.047971 containerd[2553]: time="2026-01-20T06:49:18.047932247Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:18.051298 containerd[2553]: time="2026-01-20T06:49:18.051274142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:49:18.051374 containerd[2553]: time="2026-01-20T06:49:18.051327587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:18.051500 kubelet[4005]: E0120 06:49:18.051476 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:49:18.051557 kubelet[4005]: E0120 06:49:18.051512 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:49:18.051667 kubelet[4005]: E0120 06:49:18.051623 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:18.052916 kubelet[4005]: E0120 06:49:18.052860 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:49:23.548794 kubelet[4005]: E0120 06:49:23.548595 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:49:26.553231 kubelet[4005]: E0120 06:49:26.551797 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:49:26.553231 kubelet[4005]: E0120 06:49:26.552533 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:49:29.548121 kubelet[4005]: E0120 06:49:29.548078 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:49:30.549702 kubelet[4005]: E0120 06:49:30.549615 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:49:31.551083 kubelet[4005]: E0120 06:49:31.551044 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:49:32.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.22:22-10.200.16.10:36078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:32.324497 systemd[1]: Started sshd@7-10.200.8.22:22-10.200.16.10:36078.service - OpenSSH per-connection server daemon (10.200.16.10:36078). Jan 20 06:49:32.325634 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 20 06:49:32.325671 kernel: audit: type=1130 audit(1768891772.323:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.22:22-10.200.16.10:36078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:32.863000 audit[6065]: USER_ACCT pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:32.868496 sshd-session[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:32.869120 sshd[6065]: Accepted publickey for core from 10.200.16.10 port 36078 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:49:32.869513 kernel: audit: type=1101 audit(1768891772.863:770): pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:32.869564 kernel: audit: type=1103 audit(1768891772.866:771): pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:32.866000 audit[6065]: CRED_ACQ pid=6065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:32.873092 systemd-logind[2525]: New session 11 of user core. Jan 20 06:49:32.877406 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 20 06:49:32.879006 kernel: audit: type=1006 audit(1768891772.866:772): pid=6065 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 06:49:32.866000 audit[6065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1fc791f0 a2=3 a3=0 items=0 ppid=1 pid=6065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:32.884344 kernel: audit: type=1300 audit(1768891772.866:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1fc791f0 a2=3 a3=0 items=0 ppid=1 pid=6065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:32.866000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:32.887168 kernel: audit: type=1327 audit(1768891772.866:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:32.878000 audit[6065]: USER_START pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:32.894353 kernel: audit: type=1105 audit(1768891772.878:773): pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:32.878000 audit[6069]: CRED_ACQ pid=6069 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:32.901805 kernel: audit: type=1103 audit(1768891772.878:774): pid=6069 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:33.262599 sshd[6069]: Connection closed by 10.200.16.10 port 36078 Jan 20 06:49:33.261284 sshd-session[6065]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:33.262000 audit[6065]: USER_END pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:33.273306 kernel: audit: type=1106 audit(1768891773.262:775): pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:33.265488 systemd-logind[2525]: Session 11 logged out. Waiting for processes to exit. 
Jan 20 06:49:33.266227 systemd[1]: sshd@7-10.200.8.22:22-10.200.16.10:36078.service: Deactivated successfully. Jan 20 06:49:33.267974 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 06:49:33.271941 systemd-logind[2525]: Removed session 11. Jan 20 06:49:33.262000 audit[6065]: CRED_DISP pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:33.278221 kernel: audit: type=1104 audit(1768891773.262:776): pid=6065 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:33.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.8.22:22-10.200.16.10:36078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:37.548549 kubelet[4005]: E0120 06:49:37.548487 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:49:38.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.22:22-10.200.16.10:36080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:38.371110 systemd[1]: Started sshd@8-10.200.8.22:22-10.200.16.10:36080.service - OpenSSH per-connection server daemon (10.200.16.10:36080). Jan 20 06:49:38.372648 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:49:38.372676 kernel: audit: type=1130 audit(1768891778.369:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.22:22-10.200.16.10:36080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:38.915000 audit[6082]: USER_ACCT pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:38.916525 sshd[6082]: Accepted publickey for core from 10.200.16.10 port 36080 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:49:38.918415 sshd-session[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:38.916000 audit[6082]: CRED_ACQ pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:38.924154 systemd-logind[2525]: New session 12 of user core. 
Jan 20 06:49:38.927287 kernel: audit: type=1101 audit(1768891778.915:779): pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:38.927402 kernel: audit: type=1103 audit(1768891778.916:780): pid=6082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:38.930264 kernel: audit: type=1006 audit(1768891778.916:781): pid=6082 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 06:49:38.916000 audit[6082]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0a6eb9e0 a2=3 a3=0 items=0 ppid=1 pid=6082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:38.933581 kernel: audit: type=1300 audit(1768891778.916:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0a6eb9e0 a2=3 a3=0 items=0 ppid=1 pid=6082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:38.933881 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 06:49:38.916000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:38.938226 kernel: audit: type=1327 audit(1768891778.916:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:38.937000 audit[6082]: USER_START pid=6082 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:38.942000 audit[6086]: CRED_ACQ pid=6086 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:38.950077 kernel: audit: type=1105 audit(1768891778.937:782): pid=6082 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:38.950192 kernel: audit: type=1103 audit(1768891778.942:783): pid=6086 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:39.263907 sshd[6086]: Connection closed by 10.200.16.10 port 36080 Jan 20 06:49:39.263535 sshd-session[6082]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:39.263000 audit[6082]: USER_END pid=6082 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:39.267301 systemd[1]: sshd@8-10.200.8.22:22-10.200.16.10:36080.service: Deactivated successfully. Jan 20 06:49:39.269130 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 06:49:39.271959 systemd-logind[2525]: Session 12 logged out. Waiting for processes to exit. Jan 20 06:49:39.272949 systemd-logind[2525]: Removed session 12. Jan 20 06:49:39.275310 kernel: audit: type=1106 audit(1768891779.263:784): pid=6082 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:39.275502 kernel: audit: type=1104 audit(1768891779.263:785): pid=6082 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:39.263000 audit[6082]: CRED_DISP pid=6082 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:39.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.8.22:22-10.200.16.10:36080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:49:39.549176 kubelet[4005]: E0120 06:49:39.549050 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:49:41.548525 kubelet[4005]: E0120 06:49:41.548413 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:49:41.549239 kubelet[4005]: E0120 06:49:41.549078 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:49:41.549239 kubelet[4005]: E0120 06:49:41.549139 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:49:44.389164 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:49:44.389379 kernel: audit: type=1130 audit(1768891784.379:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.22:22-10.200.16.10:35570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:44.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.22:22-10.200.16.10:35570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:44.377859 systemd[1]: Started sshd@9-10.200.8.22:22-10.200.16.10:35570.service - OpenSSH per-connection server daemon (10.200.16.10:35570). 
Jan 20 06:49:44.553781 kubelet[4005]: E0120 06:49:44.553743 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:49:44.930000 audit[6101]: USER_ACCT pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:44.933774 sshd-session[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:44.939409 kernel: audit: type=1101 audit(1768891784.930:788): pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:44.939439 sshd[6101]: Accepted publickey for core from 10.200.16.10 port 35570 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:49:44.930000 audit[6101]: CRED_ACQ pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:44.941650 systemd-logind[2525]: New session 13 of user core. Jan 20 06:49:44.948568 kernel: audit: type=1103 audit(1768891784.930:789): pid=6101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:44.948641 kernel: audit: type=1006 audit(1768891784.930:790): pid=6101 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 06:49:44.930000 audit[6101]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd91286fe0 a2=3 a3=0 items=0 ppid=1 pid=6101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:44.950391 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 06:49:44.955006 kernel: audit: type=1300 audit(1768891784.930:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd91286fe0 a2=3 a3=0 items=0 ppid=1 pid=6101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:44.930000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:44.958008 kernel: audit: type=1327 audit(1768891784.930:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:44.954000 audit[6101]: USER_START pid=6101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:44.963045 kernel: audit: type=1105 audit(1768891784.954:791): pid=6101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:44.965238 kernel: audit: type=1103 audit(1768891784.955:792): pid=6105 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:44.955000 audit[6105]: CRED_ACQ pid=6105 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.278741 sshd[6105]: Connection closed by 10.200.16.10 port 35570 Jan 20 06:49:45.279728 sshd-session[6101]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:45.279000 audit[6101]: USER_END pid=6101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.282686 systemd[1]: sshd@9-10.200.8.22:22-10.200.16.10:35570.service: Deactivated successfully. Jan 20 06:49:45.284773 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 06:49:45.287664 systemd-logind[2525]: Session 13 logged out. Waiting for processes to exit. Jan 20 06:49:45.288459 systemd-logind[2525]: Removed session 13. 
Jan 20 06:49:45.290472 kernel: audit: type=1106 audit(1768891785.279:793): pid=6101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.290677 kernel: audit: type=1104 audit(1768891785.279:794): pid=6101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.279000 audit[6101]: CRED_DISP pid=6101 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.8.22:22-10.200.16.10:35570 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:45.390290 systemd[1]: Started sshd@10-10.200.8.22:22-10.200.16.10:35576.service - OpenSSH per-connection server daemon (10.200.16.10:35576). Jan 20 06:49:45.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.22:22-10.200.16.10:35576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:45.925000 audit[6118]: USER_ACCT pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.927382 sshd[6118]: Accepted publickey for core from 10.200.16.10 port 35576 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:49:45.927000 audit[6118]: CRED_ACQ pid=6118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.927000 audit[6118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58226970 a2=3 a3=0 items=0 ppid=1 pid=6118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:45.927000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:45.929466 sshd-session[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:45.933125 systemd-logind[2525]: New session 14 of user core. Jan 20 06:49:45.938881 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 06:49:45.943000 audit[6118]: USER_START pid=6118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:45.944000 audit[6122]: CRED_ACQ pid=6122 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:46.332333 sshd[6122]: Connection closed by 10.200.16.10 port 35576 Jan 20 06:49:46.333691 sshd-session[6118]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:46.335000 audit[6118]: USER_END pid=6118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:46.336000 audit[6118]: CRED_DISP pid=6118 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:46.338070 systemd-logind[2525]: Session 14 logged out. Waiting for processes to exit. Jan 20 06:49:46.339876 systemd[1]: sshd@10-10.200.8.22:22-10.200.16.10:35576.service: Deactivated successfully. Jan 20 06:49:46.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.8.22:22-10.200.16.10:35576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:46.341573 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 06:49:46.343601 systemd-logind[2525]: Removed session 14. Jan 20 06:49:46.442887 systemd[1]: Started sshd@11-10.200.8.22:22-10.200.16.10:35590.service - OpenSSH per-connection server daemon (10.200.16.10:35590). Jan 20 06:49:46.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.22:22-10.200.16.10:35590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:49:46.986000 audit[6132]: USER_ACCT pid=6132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:46.986870 sshd[6132]: Accepted publickey for core from 10.200.16.10 port 35590 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:49:46.987000 audit[6132]: CRED_ACQ pid=6132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:46.987000 audit[6132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff600e3730 a2=3 a3=0 items=0 ppid=1 pid=6132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:46.987000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:46.988555 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:46.993404 systemd-logind[2525]: New session 15 of user core. Jan 20 06:49:46.997364 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 06:49:46.999000 audit[6132]: USER_START pid=6132 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:47.000000 audit[6138]: CRED_ACQ pid=6138 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:47.334312 sshd[6138]: Connection closed by 10.200.16.10 port 35590 Jan 20 06:49:47.336320 sshd-session[6132]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:47.337000 audit[6132]: USER_END pid=6132 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:47.337000 audit[6132]: CRED_DISP pid=6132 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:47.339374 systemd[1]: sshd@11-10.200.8.22:22-10.200.16.10:35590.service: Deactivated successfully. Jan 20 06:49:47.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.8.22:22-10.200.16.10:35590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:47.341322 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 06:49:47.344941 systemd-logind[2525]: Session 15 logged out. Waiting for processes to exit. 
Jan 20 06:49:47.346406 systemd-logind[2525]: Removed session 15. Jan 20 06:49:49.548474 kubelet[4005]: E0120 06:49:49.548436 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:49:52.447484 systemd[1]: Started sshd@12-10.200.8.22:22-10.200.16.10:50804.service - OpenSSH per-connection server daemon (10.200.16.10:50804). Jan 20 06:49:52.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.22:22-10.200.16.10:50804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:52.450315 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 06:49:52.450370 kernel: audit: type=1130 audit(1768891792.446:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.22:22-10.200.16.10:50804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:52.550524 kubelet[4005]: E0120 06:49:52.550495 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:49:52.551485 kubelet[4005]: E0120 06:49:52.551455 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:49:52.991197 sshd[6161]: Accepted publickey for core from 10.200.16.10 port 50804 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:49:52.989000 audit[6161]: USER_ACCT pid=6161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:52.993324 sshd-session[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:52.998236 kernel: audit: type=1101 audit(1768891792.989:815): pid=6161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 
20 06:49:52.991000 audit[6161]: CRED_ACQ pid=6161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.006240 kernel: audit: type=1103 audit(1768891792.991:816): pid=6161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.011228 kernel: audit: type=1006 audit(1768891792.991:817): pid=6161 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 06:49:53.011258 systemd-logind[2525]: New session 16 of user core. Jan 20 06:49:52.991000 audit[6161]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe94b95f70 a2=3 a3=0 items=0 ppid=1 pid=6161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:52.991000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:53.019819 kernel: audit: type=1300 audit(1768891792.991:817): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe94b95f70 a2=3 a3=0 items=0 ppid=1 pid=6161 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:53.019864 kernel: audit: type=1327 audit(1768891792.991:817): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:53.020399 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 06:49:53.021000 audit[6161]: USER_START pid=6161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.023000 audit[6165]: CRED_ACQ pid=6165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.032728 kernel: audit: type=1105 audit(1768891793.021:818): pid=6161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.032824 kernel: audit: type=1103 audit(1768891793.023:819): pid=6165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.335252 sshd[6165]: Connection closed by 10.200.16.10 port 50804 Jan 20 06:49:53.336330 sshd-session[6161]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:53.336000 audit[6161]: USER_END pid=6161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.339694 systemd-logind[2525]: Session 16 logged out. Waiting for processes to exit. Jan 20 06:49:53.341447 systemd[1]: sshd@12-10.200.8.22:22-10.200.16.10:50804.service: Deactivated successfully. Jan 20 06:49:53.343402 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 06:49:53.345196 systemd-logind[2525]: Removed session 16. Jan 20 06:49:53.336000 audit[6161]: CRED_DISP pid=6161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.351270 kernel: audit: type=1106 audit(1768891793.336:820): pid=6161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.351366 kernel: audit: type=1104 audit(1768891793.336:821): pid=6161 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:53.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.8.22:22-10.200.16.10:50804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:49:53.548518 containerd[2553]: time="2026-01-20T06:49:53.548469095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 06:49:53.797764 containerd[2553]: time="2026-01-20T06:49:53.797735643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:53.800805 containerd[2553]: time="2026-01-20T06:49:53.800767157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 06:49:53.800887 containerd[2553]: time="2026-01-20T06:49:53.800831402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:53.801354 kubelet[4005]: E0120 06:49:53.801321 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:49:53.801561 kubelet[4005]: E0120 06:49:53.801366 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 06:49:53.801561 kubelet[4005]: E0120 06:49:53.801462 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:eaf0ffeebada4c75b92faa87b95bad61,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:53.804070 containerd[2553]: time="2026-01-20T06:49:53.804046986Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 06:49:54.049122 containerd[2553]: time="2026-01-20T06:49:54.048984842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:54.052852 containerd[2553]: time="2026-01-20T06:49:54.052756915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 06:49:54.052852 containerd[2553]: time="2026-01-20T06:49:54.052830715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:54.053129 kubelet[4005]: E0120 06:49:54.053064 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:49:54.053129 kubelet[4005]: E0120 06:49:54.053099 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 06:49:54.053335 kubelet[4005]: E0120 06:49:54.053299 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsgvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-dbbb7d496-gvwhx_calico-system(f5d911e0-cdad-43cd-8151-f2928352d9f0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:54.054716 kubelet[4005]: E0120 06:49:54.054683 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:49:56.550404 kubelet[4005]: E0120 06:49:56.550323 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:49:56.550916 containerd[2553]: time="2026-01-20T06:49:56.550695457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:49:56.793798 containerd[2553]: time="2026-01-20T06:49:56.793679659Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:49:56.796389 containerd[2553]: time="2026-01-20T06:49:56.796344970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:49:56.796432 containerd[2553]: time="2026-01-20T06:49:56.796412781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:49:56.796521 kubelet[4005]: E0120 06:49:56.796491 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:49:56.796564 kubelet[4005]: E0120 06:49:56.796529 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:49:56.796693 kubelet[4005]: E0120 06:49:56.796663 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g2w5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-d627v_calico-apiserver(6205d977-3cd2-45d3-97f2-85111cfa22a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:49:56.797935 kubelet[4005]: E0120 06:49:56.797904 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:49:58.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.22:22-10.200.16.10:50816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:49:58.446653 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:49:58.446685 kernel: audit: type=1130 audit(1768891798.444:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.22:22-10.200.16.10:50816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:58.445375 systemd[1]: Started sshd@13-10.200.8.22:22-10.200.16.10:50816.service - OpenSSH per-connection server daemon (10.200.16.10:50816). Jan 20 06:49:58.981000 audit[6179]: USER_ACCT pid=6179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:58.988229 kernel: audit: type=1101 audit(1768891798.981:824): pid=6179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:58.988308 sshd[6179]: Accepted publickey for core from 10.200.16.10 port 50816 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:49:58.989163 sshd-session[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:49:58.987000 audit[6179]: CRED_ACQ pid=6179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:58.994712 kernel: audit: type=1103 audit(1768891798.987:825): pid=6179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:58.994907 kernel: audit: type=1006 audit(1768891798.987:826): pid=6179 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 20 06:49:58.987000 audit[6179]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe953575a0 a2=3 a3=0 items=0 ppid=1 pid=6179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:59.003115 kernel: audit: type=1300 audit(1768891798.987:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe953575a0 a2=3 a3=0 items=0 ppid=1 pid=6179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:49:59.003173 kernel: audit: type=1327 audit(1768891798.987:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:58.987000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:49:59.006459 systemd-logind[2525]: New session 17 of user core. Jan 20 06:49:59.012462 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 20 06:49:59.014000 audit[6179]: USER_START pid=6179 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.022228 kernel: audit: type=1105 audit(1768891799.014:827): pid=6179 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.021000 audit[6183]: CRED_ACQ pid=6183 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.027221 kernel: audit: type=1103 audit(1768891799.021:828): pid=6183 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.371665 sshd[6183]: Connection closed by 10.200.16.10 port 50816 Jan 20 06:49:59.372023 sshd-session[6179]: pam_unix(sshd:session): session closed for user core Jan 20 06:49:59.372000 audit[6179]: USER_END pid=6179 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.379291 kernel: audit: type=1106 audit(1768891799.372:829): pid=6179 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.378599 systemd[1]: sshd@13-10.200.8.22:22-10.200.16.10:50816.service: Deactivated successfully. Jan 20 06:49:59.372000 audit[6179]: CRED_DISP pid=6179 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.382857 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 06:49:59.384238 kernel: audit: type=1104 audit(1768891799.372:830): pid=6179 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:49:59.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.8.22:22-10.200.16.10:50816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:49:59.384651 systemd-logind[2525]: Session 17 logged out. Waiting for processes to exit. Jan 20 06:49:59.385579 systemd-logind[2525]: Removed session 17. 
Jan 20 06:50:03.549515 containerd[2553]: time="2026-01-20T06:50:03.549407377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 06:50:03.796329 containerd[2553]: time="2026-01-20T06:50:03.796269442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:50:03.800067 containerd[2553]: time="2026-01-20T06:50:03.799858479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 06:50:03.800067 containerd[2553]: time="2026-01-20T06:50:03.799930203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 06:50:03.800174 kubelet[4005]: E0120 06:50:03.800107 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:50:03.800174 kubelet[4005]: E0120 06:50:03.800162 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 06:50:03.800928 kubelet[4005]: E0120 06:50:03.800883 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfbl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-569b956df8-vdchn_calico-system(6a07f6bf-8507-4691-9e22-698d9549bb6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 06:50:03.801522 containerd[2553]: time="2026-01-20T06:50:03.801270138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 06:50:03.802798 kubelet[4005]: E0120 06:50:03.802757 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:50:04.041064 containerd[2553]: time="2026-01-20T06:50:04.040960960Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:50:04.043768 containerd[2553]: time="2026-01-20T06:50:04.043675510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 06:50:04.043768 containerd[2553]: time="2026-01-20T06:50:04.043744301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 06:50:04.044024 kubelet[4005]: E0120 06:50:04.043982 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:50:04.044110 kubelet[4005]: E0120 06:50:04.044029 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 06:50:04.044203 kubelet[4005]: E0120 06:50:04.044147 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7z5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c4kj6_calico-system(521ce380-6f9e-4050-b213-569fcc069aed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 06:50:04.045848 kubelet[4005]: E0120 06:50:04.045809 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:50:04.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@14-10.200.8.22:22-10.200.16.10:48326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:04.481498 systemd[1]: Started sshd@14-10.200.8.22:22-10.200.16.10:48326.service - OpenSSH per-connection server daemon (10.200.16.10:48326). Jan 20 06:50:04.482702 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:50:04.482743 kernel: audit: type=1130 audit(1768891804.480:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.22:22-10.200.16.10:48326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:05.026000 audit[6226]: USER_ACCT pid=6226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.029705 sshd-session[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:05.030564 sshd[6226]: Accepted publickey for core from 10.200.16.10 port 48326 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:05.027000 audit[6226]: CRED_ACQ pid=6226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.036139 systemd-logind[2525]: New session 18 of user core. Jan 20 06:50:05.038909 kernel: audit: type=1101 audit(1768891805.026:833): pid=6226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.038973 kernel: audit: type=1103 audit(1768891805.027:834): pid=6226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.048227 kernel: audit: type=1006 audit(1768891805.027:835): pid=6226 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 20 06:50:05.048287 kernel: audit: type=1300 audit(1768891805.027:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6c0f6d80 a2=3 a3=0 items=0 ppid=1 pid=6226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:05.027000 audit[6226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd6c0f6d80 a2=3 a3=0 items=0 ppid=1 pid=6226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:05.046391 systemd[1]: Started session-18.scope - Session 18 of User core. 
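Editor's note: each Calico image pull above ends in a 404 from ghcr.io, which containerd reports as "not found" and kubelet turns into ErrImagePull. One way to confirm independently whether a given tag exists is to query the registry's manifest endpoint directly. The sketch below is a hedged example using the public OCI distribution API and ghcr.io's anonymous token endpoint; the endpoint layout and Accept headers are assumptions based on the standard registry protocol, not something taken from this log.

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str, registry: str = "ghcr.io") -> bool:
        # Anonymous pull token for a public repository (Docker/OCI registry auth flow).
        token_url = f"https://{registry}/token?service={registry}&scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # Manifest lookup; a 404 here corresponds to containerd's "not found" above.
        req = urllib.request.Request(
            f"https://{registry}/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    # One of the references failing in the log above:
    print(tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))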
Jan 20 06:50:05.027000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:05.057230 kernel: audit: type=1327 audit(1768891805.027:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:05.056000 audit[6226]: USER_START pid=6226 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.059000 audit[6230]: CRED_ACQ pid=6230 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.069554 kernel: audit: type=1105 audit(1768891805.056:836): pid=6226 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.069621 kernel: audit: type=1103 audit(1768891805.059:837): pid=6230 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.409941 sshd[6230]: Connection closed by 10.200.16.10 port 48326 Jan 20 06:50:05.410344 sshd-session[6226]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:05.410000 audit[6226]: USER_END pid=6226 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.414045 systemd[1]: sshd@14-10.200.8.22:22-10.200.16.10:48326.service: Deactivated successfully. Jan 20 06:50:05.416445 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 06:50:05.417256 systemd-logind[2525]: Session 18 logged out. Waiting for processes to exit. Jan 20 06:50:05.418692 systemd-logind[2525]: Removed session 18. 
Jan 20 06:50:05.410000 audit[6226]: CRED_DISP pid=6226 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.424935 kernel: audit: type=1106 audit(1768891805.410:838): pid=6226 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.426278 kernel: audit: type=1104 audit(1768891805.410:839): pid=6226 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:05.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.8.22:22-10.200.16.10:48326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:05.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.22:22-10.200.16.10:48328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:05.518389 systemd[1]: Started sshd@15-10.200.8.22:22-10.200.16.10:48328.service - OpenSSH per-connection server daemon (10.200.16.10:48328). Jan 20 06:50:05.549425 containerd[2553]: time="2026-01-20T06:50:05.549396983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 06:50:05.839844 containerd[2553]: time="2026-01-20T06:50:05.839516638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:50:05.842021 containerd[2553]: time="2026-01-20T06:50:05.841987887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 06:50:05.842101 containerd[2553]: time="2026-01-20T06:50:05.842062201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 06:50:05.842265 kubelet[4005]: E0120 06:50:05.842225 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:50:05.842477 kubelet[4005]: E0120 06:50:05.842273 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 06:50:05.842573 kubelet[4005]: E0120 06:50:05.842525 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xdd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64f54f655c-9bp6l_calico-apiserver(2309f609-f83d-4aea-8896-a25cb505ea38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 06:50:05.843697 kubelet[4005]: E0120 06:50:05.843670 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:50:06.058000 audit[6242]: USER_ACCT pid=6242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:06.060269 sshd[6242]: Accepted publickey for core from 10.200.16.10 port 48328 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:06.059000 audit[6242]: CRED_ACQ pid=6242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:06.059000 audit[6242]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9003d420 a2=3 a3=0 items=0 
ppid=1 pid=6242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:06.059000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:06.061616 sshd-session[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:06.065253 systemd-logind[2525]: New session 19 of user core. Jan 20 06:50:06.069341 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 06:50:06.070000 audit[6242]: USER_START pid=6242 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:06.071000 audit[6247]: CRED_ACQ pid=6247 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:06.462377 sshd[6247]: Connection closed by 10.200.16.10 port 48328 Jan 20 06:50:06.462728 sshd-session[6242]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:06.462000 audit[6242]: USER_END pid=6242 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:06.462000 audit[6242]: CRED_DISP pid=6242 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:06.465311 systemd[1]: sshd@15-10.200.8.22:22-10.200.16.10:48328.service: Deactivated successfully. Jan 20 06:50:06.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.8.22:22-10.200.16.10:48328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:06.467301 systemd-logind[2525]: Session 19 logged out. Waiting for processes to exit. Jan 20 06:50:06.467467 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 06:50:06.469766 systemd-logind[2525]: Removed session 19. Jan 20 06:50:06.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.22:22-10.200.16.10:48338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:06.577456 systemd[1]: Started sshd@16-10.200.8.22:22-10.200.16.10:48338.service - OpenSSH per-connection server daemon (10.200.16.10:48338). 
Jan 20 06:50:07.122000 audit[6257]: USER_ACCT pid=6257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:07.123771 sshd[6257]: Accepted publickey for core from 10.200.16.10 port 48338 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:07.123000 audit[6257]: CRED_ACQ pid=6257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:07.123000 audit[6257]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc461f3360 a2=3 a3=0 items=0 ppid=1 pid=6257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:07.123000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:07.125552 sshd-session[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:07.130587 systemd-logind[2525]: New session 20 of user core. Jan 20 06:50:07.136375 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 06:50:07.137000 audit[6257]: USER_START pid=6257 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:07.139000 audit[6261]: CRED_ACQ pid=6261 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:07.550424 kubelet[4005]: E0120 06:50:07.549717 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:50:07.550424 kubelet[4005]: E0120 06:50:07.549790 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:50:07.550806 containerd[2553]: time="2026-01-20T06:50:07.549903245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 06:50:07.776000 audit[6285]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=6285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:50:07.776000 audit[6285]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdc8fe4a30 a2=0 a3=7ffdc8fe4a1c items=0 ppid=4134 pid=6285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:07.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:50:07.781000 audit[6285]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=6285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:50:07.781000 audit[6285]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdc8fe4a30 a2=0 a3=0 items=0 ppid=4134 pid=6285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:07.781000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:50:07.794000 audit[6287]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=6287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:50:07.794000 audit[6287]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff7db98dd0 a2=0 a3=7fff7db98dbc items=0 ppid=4134 pid=6287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:07.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:50:07.799000 audit[6287]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=6287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:50:07.799000 audit[6287]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff7db98dd0 a2=0 a3=0 items=0 ppid=4134 pid=6287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:07.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:50:07.804785 containerd[2553]: time="2026-01-20T06:50:07.804689353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:50:07.807893 containerd[2553]: time="2026-01-20T06:50:07.807453500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 06:50:07.807893 containerd[2553]: time="2026-01-20T06:50:07.807517935Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 06:50:07.808002 kubelet[4005]: E0120 06:50:07.807959 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:50:07.808002 kubelet[4005]: E0120 06:50:07.807994 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 06:50:07.808135 kubelet[4005]: E0120 06:50:07.808101 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 06:50:07.810034 containerd[2553]: time="2026-01-20T06:50:07.810015290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 06:50:07.886352 sshd[6261]: Connection closed by 10.200.16.10 port 48338 Jan 20 06:50:07.886937 sshd-session[6257]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:07.886000 audit[6257]: USER_END pid=6257 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:07.886000 audit[6257]: CRED_DISP pid=6257 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:07.891645 systemd[1]: sshd@16-10.200.8.22:22-10.200.16.10:48338.service: Deactivated successfully. Jan 20 06:50:07.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.8.22:22-10.200.16.10:48338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:07.894875 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 06:50:07.896045 systemd-logind[2525]: Session 20 logged out. Waiting for processes to exit. Jan 20 06:50:07.897827 systemd-logind[2525]: Removed session 20. Jan 20 06:50:07.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.22:22-10.200.16.10:48354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:07.995813 systemd[1]: Started sshd@17-10.200.8.22:22-10.200.16.10:48354.service - OpenSSH per-connection server daemon (10.200.16.10:48354). Jan 20 06:50:08.064789 containerd[2553]: time="2026-01-20T06:50:08.064513434Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 06:50:08.068571 containerd[2553]: time="2026-01-20T06:50:08.068537636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 06:50:08.068657 containerd[2553]: time="2026-01-20T06:50:08.068597540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 06:50:08.068739 kubelet[4005]: E0120 06:50:08.068685 4005 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:50:08.068779 kubelet[4005]: E0120 06:50:08.068747 4005 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 06:50:08.069019 kubelet[4005]: E0120 06:50:08.068882 4005 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppfnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-r95bt_calico-system(feca3a47-a9f0-4272-a08e-b4b137171f9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 06:50:08.070074 kubelet[4005]: E0120 06:50:08.070035 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:50:08.525000 audit[6292]: USER_ACCT pid=6292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:08.526363 sshd[6292]: Accepted publickey for core from 10.200.16.10 port 48354 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:08.526000 audit[6292]: CRED_ACQ pid=6292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' Jan 20 06:50:08.526000 audit[6292]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce6a24110 a2=3 a3=0 items=0 ppid=1 pid=6292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:08.526000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:08.528171 sshd-session[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:08.534386 systemd-logind[2525]: New session 21 of user core. Jan 20 06:50:08.541387 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 06:50:08.544000 audit[6292]: USER_START pid=6292 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:08.546000 audit[6296]: CRED_ACQ pid=6296 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:08.964260 sshd[6296]: Connection closed by 10.200.16.10 port 48354 Jan 20 06:50:08.964963 sshd-session[6292]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:08.965000 audit[6292]: USER_END pid=6292 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:08.965000 audit[6292]: CRED_DISP pid=6292 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:08.968141 systemd-logind[2525]: Session 21 logged out. Waiting for processes to exit. Jan 20 06:50:08.968282 systemd[1]: sshd@17-10.200.8.22:22-10.200.16.10:48354.service: Deactivated successfully. Jan 20 06:50:08.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.8.22:22-10.200.16.10:48354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:08.970042 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 06:50:08.971655 systemd-logind[2525]: Removed session 21. Jan 20 06:50:09.073703 systemd[1]: Started sshd@18-10.200.8.22:22-10.200.16.10:48368.service - OpenSSH per-connection server daemon (10.200.16.10:48368). Jan 20 06:50:09.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.22:22-10.200.16.10:48368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:50:09.609000 audit[6306]: USER_ACCT pid=6306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.609423 sshd[6306]: Accepted publickey for core from 10.200.16.10 port 48368 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:09.615262 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 20 06:50:09.615327 kernel: audit: type=1101 audit(1768891809.609:873): pid=6306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.613503 sshd-session[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:09.620011 kernel: audit: type=1103 audit(1768891809.610:874): pid=6306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.610000 audit[6306]: CRED_ACQ pid=6306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.622826 kernel: audit: type=1006 audit(1768891809.610:875): pid=6306 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 06:50:09.622877 kernel: audit: type=1300 audit(1768891809.610:875): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc00fc1410 a2=3 a3=0 items=0 ppid=1 pid=6306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:09.610000 audit[6306]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc00fc1410 a2=3 a3=0 items=0 ppid=1 pid=6306 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:09.620452 systemd-logind[2525]: New session 22 of user core. Jan 20 06:50:09.610000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:09.628404 kernel: audit: type=1327 audit(1768891809.610:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:09.629423 systemd[1]: Started session-22.scope - Session 22 of User core. 
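Editor's note: once a pull has failed with ErrImagePull, kubelet does not retry immediately; later sync attempts report ImagePullBackOff ("Back-off pulling image ..."), as in the whisker and apiserver messages above, until a capped exponential back-off expires. The Python sketch below only illustrates that general pattern; the initial delay, factor, and cap are assumed values for illustration, not parameters taken from this log.

    def backoff_delays(initial: float = 10.0, factor: float = 2.0, cap: float = 300.0):
        """Capped exponential back-off: the general pattern behind repeated
        "Back-off pulling image" messages (these parameters are illustrative)."""
        delay = initial
        while True:
            yield delay
            delay = min(delay * factor, cap)

    for attempt, delay in zip(range(1, 7), backoff_delays()):
        print(f"attempt {attempt}: wait {delay:.0f}s before retrying the pull")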
Jan 20 06:50:09.631000 audit[6306]: USER_START pid=6306 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.634000 audit[6310]: CRED_ACQ pid=6310 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.639245 kernel: audit: type=1105 audit(1768891809.631:876): pid=6306 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.639299 kernel: audit: type=1103 audit(1768891809.634:877): pid=6310 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.973260 sshd[6310]: Connection closed by 10.200.16.10 port 48368 Jan 20 06:50:09.974737 sshd-session[6306]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:09.975000 audit[6306]: USER_END pid=6306 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.977697 systemd[1]: sshd@18-10.200.8.22:22-10.200.16.10:48368.service: Deactivated successfully. Jan 20 06:50:09.980022 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 06:50:09.980986 systemd-logind[2525]: Session 22 logged out. Waiting for processes to exit. Jan 20 06:50:09.983546 systemd-logind[2525]: Removed session 22. Jan 20 06:50:09.985246 kernel: audit: type=1106 audit(1768891809.975:878): pid=6306 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.985440 kernel: audit: type=1104 audit(1768891809.975:879): pid=6306 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.975000 audit[6306]: CRED_DISP pid=6306 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:09.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.22:22-10.200.16.10:48368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:50:09.991218 kernel: audit: type=1131 audit(1768891809.975:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.8.22:22-10.200.16.10:48368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:15.087403 systemd[1]: Started sshd@19-10.200.8.22:22-10.200.16.10:37574.service - OpenSSH per-connection server daemon (10.200.16.10:37574). Jan 20 06:50:15.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.22:22-10.200.16.10:37574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:15.092237 kernel: audit: type=1130 audit(1768891815.086:881): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.22:22-10.200.16.10:37574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:15.548128 kubelet[4005]: E0120 06:50:15.548033 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:50:15.629000 audit[6322]: USER_ACCT pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.636699 systemd-logind[2525]: New session 23 of user core. Jan 20 06:50:15.632335 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:15.638926 sshd[6322]: Accepted publickey for core from 10.200.16.10 port 37574 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:15.643286 kernel: audit: type=1101 audit(1768891815.629:882): pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.643355 kernel: audit: type=1103 audit(1768891815.629:883): pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.629000 audit[6322]: CRED_ACQ pid=6322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.645749 kernel: audit: type=1006 audit(1768891815.629:884): pid=6322 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 06:50:15.646754 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 06:50:15.629000 audit[6322]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb046b230 a2=3 a3=0 items=0 ppid=1 pid=6322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:15.629000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:15.656301 kernel: audit: type=1300 audit(1768891815.629:884): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb046b230 a2=3 a3=0 items=0 ppid=1 pid=6322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:15.656398 kernel: audit: type=1327 audit(1768891815.629:884): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:15.647000 audit[6322]: USER_START pid=6322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.661284 kernel: audit: type=1105 audit(1768891815.647:885): pid=6322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.647000 audit[6326]: CRED_ACQ pid=6326 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.666016 kernel: audit: type=1103 audit(1768891815.647:886): pid=6326 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.975023 sshd[6326]: Connection closed by 10.200.16.10 port 37574 Jan 20 06:50:15.975674 sshd-session[6322]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:15.975000 audit[6322]: USER_END pid=6322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.978582 systemd-logind[2525]: Session 23 logged out. Waiting for processes to exit. Jan 20 06:50:15.980134 systemd[1]: sshd@19-10.200.8.22:22-10.200.16.10:37574.service: Deactivated successfully. Jan 20 06:50:15.982383 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 06:50:15.984496 systemd-logind[2525]: Removed session 23. 
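Editor's note: the same handful of image references keeps failing across these records. Since every failure is logged as "failed to resolve image: <ref>: not found", a short script can list the distinct missing references from a journal excerpt like this one; the regex below is an assumption tailored to that exact phrasing.

    import re
    import sys

    # containerd/kubelet record each failure as "failed to resolve image: <ref>: not found".
    NOT_FOUND = re.compile(r"failed to resolve image: (\S+): not found")

    def missing_images(journal_text: str) -> set:
        return set(NOT_FOUND.findall(journal_text))

    if __name__ == "__main__":
        # e.g.  journalctl -u kubelet | python3 list_missing_images.py
        for ref in sorted(missing_images(sys.stdin.read())):
            print(ref)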
Jan 20 06:50:15.975000 audit[6322]: CRED_DISP pid=6322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.988338 kernel: audit: type=1106 audit(1768891815.975:887): pid=6322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.988385 kernel: audit: type=1104 audit(1768891815.975:888): pid=6322 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:15.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.8.22:22-10.200.16.10:37574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:16.615000 audit[6340]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=6340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:50:16.615000 audit[6340]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcafb9d040 a2=0 a3=7ffcafb9d02c items=0 ppid=4134 pid=6340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:16.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:50:16.620000 audit[6340]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=6340 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 06:50:16.620000 audit[6340]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcafb9d040 a2=0 a3=7ffcafb9d02c items=0 ppid=4134 pid=6340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:16.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 06:50:18.549323 kubelet[4005]: E0120 06:50:18.549024 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:50:18.549738 kubelet[4005]: E0120 06:50:18.549465 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:50:21.089622 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 06:50:21.090025 kernel: audit: type=1130 audit(1768891821.086:892): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.22:22-10.200.16.10:36274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:21.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.22:22-10.200.16.10:36274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:21.087626 systemd[1]: Started sshd@20-10.200.8.22:22-10.200.16.10:36274.service - OpenSSH per-connection server daemon (10.200.16.10:36274). Jan 20 06:50:21.548825 kubelet[4005]: E0120 06:50:21.548786 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:50:21.633000 audit[6342]: USER_ACCT pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.636867 sshd-session[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:21.642125 sshd[6342]: Accepted publickey for core from 10.200.16.10 port 36274 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:21.642244 kernel: audit: type=1101 audit(1768891821.633:893): pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.633000 audit[6342]: CRED_ACQ pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.642366 systemd-logind[2525]: New session 24 of user core. Jan 20 06:50:21.649549 kernel: audit: type=1103 audit(1768891821.633:894): pid=6342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.649598 kernel: audit: type=1006 audit(1768891821.633:895): pid=6342 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 06:50:21.651012 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 20 06:50:21.633000 audit[6342]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1a16fbf0 a2=3 a3=0 items=0 ppid=1 pid=6342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:21.658230 kernel: audit: type=1300 audit(1768891821.633:895): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1a16fbf0 a2=3 a3=0 items=0 ppid=1 pid=6342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:21.633000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:21.658000 audit[6342]: USER_START pid=6342 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.666848 kernel: audit: type=1327 audit(1768891821.633:895): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:21.666892 kernel: audit: type=1105 audit(1768891821.658:896): pid=6342 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.660000 audit[6346]: CRED_ACQ pid=6346 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.671197 kernel: audit: type=1103 audit(1768891821.660:897): pid=6346 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.981781 sshd[6346]: Connection closed by 10.200.16.10 port 36274 Jan 20 06:50:21.982002 sshd-session[6342]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:21.982000 audit[6342]: USER_END pid=6342 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.988191 systemd[1]: sshd@20-10.200.8.22:22-10.200.16.10:36274.service: Deactivated successfully. Jan 20 06:50:21.991567 kernel: audit: type=1106 audit(1768891821.982:898): pid=6342 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.990140 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 06:50:21.991795 systemd-logind[2525]: Session 24 logged out. Waiting for processes to exit. 
Jan 20 06:50:21.982000 audit[6342]: CRED_DISP pid=6342 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:21.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.8.22:22-10.200.16.10:36274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:21.995427 systemd-logind[2525]: Removed session 24. Jan 20 06:50:21.997449 kernel: audit: type=1104 audit(1768891821.982:899): pid=6342 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:22.551226 kubelet[4005]: E0120 06:50:22.551003 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:50:23.550141 kubelet[4005]: E0120 06:50:23.549427 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:50:27.101379 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:50:27.101480 kernel: audit: type=1130 audit(1768891827.094:901): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.22:22-10.200.16.10:36278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:27.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.22:22-10.200.16.10:36278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:50:27.095568 systemd[1]: Started sshd@21-10.200.8.22:22-10.200.16.10:36278.service - OpenSSH per-connection server daemon (10.200.16.10:36278). Jan 20 06:50:27.634528 sshd[6358]: Accepted publickey for core from 10.200.16.10 port 36278 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:27.642922 kernel: audit: type=1101 audit(1768891827.633:902): pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:27.633000 audit[6358]: USER_ACCT pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:27.644121 sshd-session[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:27.642000 audit[6358]: CRED_ACQ pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:27.656239 kernel: audit: type=1103 audit(1768891827.642:903): pid=6358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:27.659163 systemd-logind[2525]: New session 25 of user core. Jan 20 06:50:27.667306 kernel: audit: type=1006 audit(1768891827.642:904): pid=6358 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 20 06:50:27.666645 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 20 06:50:27.642000 audit[6358]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe11ef59b0 a2=3 a3=0 items=0 ppid=1 pid=6358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:27.681969 kernel: audit: type=1300 audit(1768891827.642:904): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe11ef59b0 a2=3 a3=0 items=0 ppid=1 pid=6358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:27.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:27.686229 kernel: audit: type=1327 audit(1768891827.642:904): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:27.669000 audit[6358]: USER_START pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:27.696194 kernel: audit: type=1105 audit(1768891827.669:905): pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:27.671000 audit[6362]: CRED_ACQ pid=6362 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:27.706227 kernel: audit: type=1103 audit(1768891827.671:906): pid=6362 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:28.009368 sshd[6362]: Connection closed by 10.200.16.10 port 36278 Jan 20 06:50:28.010338 sshd-session[6358]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:28.010000 audit[6358]: USER_END pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:28.014164 systemd[1]: sshd@21-10.200.8.22:22-10.200.16.10:36278.service: Deactivated successfully. Jan 20 06:50:28.016176 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 06:50:28.017991 systemd-logind[2525]: Session 25 logged out. Waiting for processes to exit. Jan 20 06:50:28.018667 systemd-logind[2525]: Removed session 25. 
Jan 20 06:50:28.010000 audit[6358]: CRED_DISP pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:28.030679 kernel: audit: type=1106 audit(1768891828.010:907): pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:28.030716 kernel: audit: type=1104 audit(1768891828.010:908): pid=6358 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:28.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.8.22:22-10.200.16.10:36278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:28.548916 kubelet[4005]: E0120 06:50:28.548882 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:50:29.548798 kubelet[4005]: E0120 06:50:29.548356 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:50:33.120260 systemd[1]: Started sshd@22-10.200.8.22:22-10.200.16.10:48216.service - OpenSSH per-connection server daemon (10.200.16.10:48216). Jan 20 06:50:33.126643 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:50:33.126674 kernel: audit: type=1130 audit(1768891833.120:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.22:22-10.200.16.10:48216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:33.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.22:22-10.200.16.10:48216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 06:50:33.549151 kubelet[4005]: E0120 06:50:33.549068 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:50:33.549688 kubelet[4005]: E0120 06:50:33.549621 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38" Jan 20 06:50:33.658000 audit[6399]: USER_ACCT pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:33.659237 sshd[6399]: Accepted publickey for core from 10.200.16.10 port 48216 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:33.660999 sshd-session[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:33.665242 kernel: audit: type=1101 audit(1768891833.658:911): pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:33.658000 audit[6399]: CRED_ACQ pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:33.666426 systemd-logind[2525]: New session 26 of user core. Jan 20 06:50:33.672403 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 20 06:50:33.674662 kernel: audit: type=1103 audit(1768891833.658:912): pid=6399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:33.674709 kernel: audit: type=1006 audit(1768891833.658:913): pid=6399 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 20 06:50:33.658000 audit[6399]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd854ca90 a2=3 a3=0 items=0 ppid=1 pid=6399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:33.680340 kernel: audit: type=1300 audit(1768891833.658:913): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdd854ca90 a2=3 a3=0 items=0 ppid=1 pid=6399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:33.658000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:33.683431 kernel: audit: type=1327 audit(1768891833.658:913): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:33.675000 audit[6399]: USER_START pid=6399 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:33.688564 kernel: audit: type=1105 audit(1768891833.675:914): pid=6399 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:33.692554 kernel: audit: type=1103 audit(1768891833.675:915): pid=6403 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:33.675000 audit[6403]: CRED_ACQ pid=6403 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:34.021227 sshd[6403]: Connection closed by 10.200.16.10 port 48216 Jan 20 06:50:34.022325 sshd-session[6399]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:34.022000 audit[6399]: USER_END pid=6399 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:34.030223 kernel: audit: type=1106 audit(1768891834.022:916): pid=6399 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:34.030681 systemd[1]: sshd@22-10.200.8.22:22-10.200.16.10:48216.service: Deactivated successfully. Jan 20 06:50:34.035056 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 06:50:34.035981 systemd-logind[2525]: Session 26 logged out. Waiting for processes to exit. Jan 20 06:50:34.041594 kernel: audit: type=1104 audit(1768891834.022:917): pid=6399 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:34.022000 audit[6399]: CRED_DISP pid=6399 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:34.043231 systemd-logind[2525]: Removed session 26. Jan 20 06:50:34.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.8.22:22-10.200.16.10:48216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:34.551771 kubelet[4005]: E0120 06:50:34.551656 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-r95bt" podUID="feca3a47-a9f0-4272-a08e-b4b137171f9f" Jan 20 06:50:37.548268 kubelet[4005]: E0120 06:50:37.548174 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-dbbb7d496-gvwhx" podUID="f5d911e0-cdad-43cd-8151-f2928352d9f0" Jan 20 06:50:39.140175 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:50:39.140300 kernel: audit: type=1130 audit(1768891839.132:919): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.22:22-10.200.16.10:48232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:39.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.22:22-10.200.16.10:48232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:39.134032 systemd[1]: Started sshd@23-10.200.8.22:22-10.200.16.10:48232.service - OpenSSH per-connection server daemon (10.200.16.10:48232). Jan 20 06:50:39.688000 audit[6414]: USER_ACCT pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:39.694228 kernel: audit: type=1101 audit(1768891839.688:920): pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:39.692877 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:39.694564 sshd[6414]: Accepted publickey for core from 10.200.16.10 port 48232 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:39.688000 audit[6414]: CRED_ACQ pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:39.701287 systemd-logind[2525]: New session 27 of user core. Jan 20 06:50:39.703866 kernel: audit: type=1103 audit(1768891839.688:921): pid=6414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:39.703913 kernel: audit: type=1006 audit(1768891839.688:922): pid=6414 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 20 06:50:39.688000 audit[6414]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff492f1c80 a2=3 a3=0 items=0 ppid=1 pid=6414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:39.708648 kernel: audit: type=1300 audit(1768891839.688:922): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff492f1c80 a2=3 a3=0 items=0 ppid=1 pid=6414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:39.688000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:39.710392 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 20 06:50:39.711995 kernel: audit: type=1327 audit(1768891839.688:922): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:39.712000 audit[6414]: USER_START pid=6414 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:39.717000 audit[6418]: CRED_ACQ pid=6418 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:39.721307 kernel: audit: type=1105 audit(1768891839.712:923): pid=6414 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:39.721368 kernel: audit: type=1103 audit(1768891839.717:924): pid=6418 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:40.039137 sshd[6418]: Connection closed by 10.200.16.10 port 48232 Jan 20 06:50:40.039078 sshd-session[6414]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:40.039000 audit[6414]: USER_END pid=6414 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:40.039000 audit[6414]: CRED_DISP pid=6414 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:40.048115 kernel: audit: type=1106 audit(1768891840.039:925): pid=6414 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:40.048161 kernel: audit: type=1104 audit(1768891840.039:926): pid=6414 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:40.048068 systemd[1]: sshd@23-10.200.8.22:22-10.200.16.10:48232.service: Deactivated successfully. Jan 20 06:50:40.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.8.22:22-10.200.16.10:48232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:40.051403 systemd[1]: session-27.scope: Deactivated successfully. 
Jan 20 06:50:40.052700 systemd-logind[2525]: Session 27 logged out. Waiting for processes to exit. Jan 20 06:50:40.054062 systemd-logind[2525]: Removed session 27. Jan 20 06:50:43.548828 kubelet[4005]: E0120 06:50:43.548482 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c4kj6" podUID="521ce380-6f9e-4050-b213-569fcc069aed" Jan 20 06:50:43.549514 kubelet[4005]: E0120 06:50:43.549365 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-569b956df8-vdchn" podUID="6a07f6bf-8507-4691-9e22-698d9549bb6f" Jan 20 06:50:45.194726 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 06:50:45.194823 kernel: audit: type=1130 audit(1768891845.186:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.22:22-10.200.16.10:43472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:45.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.22:22-10.200.16.10:43472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:45.187512 systemd[1]: Started sshd@24-10.200.8.22:22-10.200.16.10:43472.service - OpenSSH per-connection server daemon (10.200.16.10:43472). Jan 20 06:50:45.774000 audit[6432]: USER_ACCT pid=6432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:45.775770 sshd[6432]: Accepted publickey for core from 10.200.16.10 port 43472 ssh2: RSA SHA256:A6dRZ9mR0riM04bsyfD02kXdpFK/aTrU0Lz3zu/8e+M Jan 20 06:50:45.777667 sshd-session[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 06:50:45.775000 audit[6432]: CRED_ACQ pid=6432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:45.783794 systemd-logind[2525]: New session 28 of user core. 
Jan 20 06:50:45.786821 kernel: audit: type=1101 audit(1768891845.774:929): pid=6432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:45.786873 kernel: audit: type=1103 audit(1768891845.775:930): pid=6432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:45.792171 kernel: audit: type=1006 audit(1768891845.775:931): pid=6432 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 20 06:50:45.775000 audit[6432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd3991430 a2=3 a3=0 items=0 ppid=1 pid=6432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:45.799318 kernel: audit: type=1300 audit(1768891845.775:931): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd3991430 a2=3 a3=0 items=0 ppid=1 pid=6432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 06:50:45.775000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:45.801423 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 06:50:45.803124 kernel: audit: type=1327 audit(1768891845.775:931): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 06:50:45.804000 audit[6432]: USER_START pid=6432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:45.814241 kernel: audit: type=1105 audit(1768891845.804:932): pid=6432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:45.813000 audit[6436]: CRED_ACQ pid=6436 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:45.819230 kernel: audit: type=1103 audit(1768891845.813:933): pid=6436 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:46.287962 sshd[6436]: Connection closed by 10.200.16.10 port 43472 Jan 20 06:50:46.289855 sshd-session[6432]: pam_unix(sshd:session): session closed for user core Jan 20 06:50:46.289000 audit[6432]: USER_END pid=6432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:46.301284 kernel: audit: type=1106 audit(1768891846.289:934): pid=6432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:46.304189 systemd[1]: sshd@24-10.200.8.22:22-10.200.16.10:43472.service: Deactivated successfully. Jan 20 06:50:46.289000 audit[6432]: CRED_DISP pid=6432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:46.307761 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 06:50:46.310853 systemd-logind[2525]: Session 28 logged out. Waiting for processes to exit. Jan 20 06:50:46.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.8.22:22-10.200.16.10:43472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 06:50:46.312761 kernel: audit: type=1104 audit(1768891846.289:935): pid=6432 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Jan 20 06:50:46.311858 systemd-logind[2525]: Removed session 28. Jan 20 06:50:47.550062 kubelet[4005]: E0120 06:50:47.549974 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-d627v" podUID="6205d977-3cd2-45d3-97f2-85111cfa22a7" Jan 20 06:50:48.549558 kubelet[4005]: E0120 06:50:48.548977 4005 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64f54f655c-9bp6l" podUID="2309f609-f83d-4aea-8896-a25cb505ea38"