Dec 16 12:57:06.134854 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:17:57 -00 2025
Dec 16 12:57:06.135681 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 12:57:06.135696 kernel: BIOS-provided physical RAM map:
Dec 16 12:57:06.135706 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 16 12:57:06.135715 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 16 12:57:06.135727 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 16 12:57:06.135738 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 16 12:57:06.135748 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Dec 16 12:57:06.135763 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 16 12:57:06.135773 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 16 12:57:06.135783 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 16 12:57:06.135793 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 16 12:57:06.135802 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 16 12:57:06.135814 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 16 12:57:06.135826 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 16 12:57:06.135837 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 16 12:57:06.135850 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 12:57:06.135861 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 12:57:06.135874 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 12:57:06.135884 kernel: NX (Execute Disable) protection: active
Dec 16 12:57:06.135894 kernel: APIC: Static calls initialized
Dec 16 12:57:06.135904 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Dec 16 12:57:06.135915 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Dec 16 12:57:06.135925 kernel: extended physical RAM map:
Dec 16 12:57:06.135935 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 16 12:57:06.135946 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 16 12:57:06.135956 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 16 12:57:06.135967 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 16 12:57:06.135980 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Dec 16 12:57:06.135990 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Dec 16 12:57:06.136000 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Dec 16 12:57:06.136010 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Dec 16 12:57:06.136020 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Dec 16 12:57:06.136030 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 16 12:57:06.136039 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 16 12:57:06.136049 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 16 12:57:06.136059 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 16 12:57:06.136069 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 16 12:57:06.136079 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 16 12:57:06.136092 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 16 12:57:06.136106 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 16 12:57:06.136116 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 12:57:06.136127 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 12:57:06.136140 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 12:57:06.136150 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:57:06.136160 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Dec 16 12:57:06.136171 kernel: random: crng init done
Dec 16 12:57:06.136181 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 16 12:57:06.136191 kernel: secureboot: Secure boot enabled
Dec 16 12:57:06.136201 kernel: SMBIOS 2.8 present.
Dec 16 12:57:06.136211 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 16 12:57:06.136222 kernel: DMI: Memory slots populated: 1/1
Dec 16 12:57:06.136232 kernel: Hypervisor detected: KVM
Dec 16 12:57:06.136244 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 16 12:57:06.136254 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 12:57:06.136264 kernel: kvm-clock: using sched offset of 5334109943 cycles
Dec 16 12:57:06.136275 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:57:06.136287 kernel: tsc: Detected 2794.748 MHz processor
Dec 16 12:57:06.136298 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 12:57:06.136309 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 12:57:06.136320 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 16 12:57:06.136335 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 12:57:06.136351 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 12:57:06.136364 kernel: Using GB pages for direct mapping
Dec 16 12:57:06.136375 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:57:06.136386 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Dec 16 12:57:06.136397 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 12:57:06.136418 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:57:06.136430 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:57:06.136444 kernel: ACPI: FACS 0x000000009BBDD000 000040
Dec 16 12:57:06.136455 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:57:06.136467 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:57:06.136478 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:57:06.136490 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 12:57:06.136501 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 12:57:06.136512 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Dec 16 12:57:06.136526 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Dec 16 12:57:06.136538 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Dec 16 12:57:06.136548 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Dec 16 12:57:06.136559 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Dec 16 12:57:06.136585 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Dec 16 12:57:06.136597 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Dec 16 12:57:06.136608 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Dec 16 12:57:06.136625 kernel: No NUMA configuration found
Dec 16 12:57:06.136637 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Dec 16 12:57:06.136651 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Dec 16 12:57:06.136663 kernel: Zone ranges:
Dec 16 12:57:06.136675 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 12:57:06.136686 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Dec 16 12:57:06.136697 kernel: Normal empty
Dec 16 12:57:06.136708 kernel: Device empty
Dec 16 12:57:06.136722 kernel: Movable zone start for each node
Dec 16 12:57:06.136733 kernel: Early memory node ranges
Dec 16 12:57:06.136744 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Dec 16 12:57:06.136755 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Dec 16 12:57:06.136766 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Dec 16 12:57:06.136777 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Dec 16 12:57:06.136788 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Dec 16 12:57:06.136802 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Dec 16 12:57:06.136814 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 12:57:06.136826 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Dec 16 12:57:06.136837 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 12:57:06.136848 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 16 12:57:06.136860 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 16 12:57:06.136871 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Dec 16 12:57:06.136883 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 12:57:06.136897 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 12:57:06.136909 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 12:57:06.136920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 12:57:06.136936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 12:57:06.136948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 12:57:06.136959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 12:57:06.136970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 12:57:06.136985 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 12:57:06.136996 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 12:57:06.137007 kernel: TSC deadline timer available
Dec 16 12:57:06.137019 kernel: CPU topo: Max. logical packages: 1
Dec 16 12:57:06.137031 kernel: CPU topo: Max. logical dies: 1
Dec 16 12:57:06.137053 kernel: CPU topo: Max. dies per package: 1
Dec 16 12:57:06.137064 kernel: CPU topo: Max. threads per core: 1
Dec 16 12:57:06.137076 kernel: CPU topo: Num. cores per package: 4
Dec 16 12:57:06.137088 kernel: CPU topo: Num. threads per package: 4
Dec 16 12:57:06.137103 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 16 12:57:06.137118 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 12:57:06.137130 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 12:57:06.137142 kernel: kvm-guest: setup PV sched yield
Dec 16 12:57:06.137157 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 16 12:57:06.137169 kernel: Booting paravirtualized kernel on KVM
Dec 16 12:57:06.137181 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 12:57:06.137193 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 16 12:57:06.137205 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 16 12:57:06.137217 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 16 12:57:06.137229 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 16 12:57:06.137240 kernel: kvm-guest: PV spinlocks enabled
Dec 16 12:57:06.137255 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 12:57:06.137269 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee
Dec 16 12:57:06.137281 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:57:06.137294 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:57:06.137305 kernel: Fallback order for Node 0: 0
Dec 16 12:57:06.137317 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Dec 16 12:57:06.137332 kernel: Policy zone: DMA32
Dec 16 12:57:06.137344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:57:06.137355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 12:57:06.137367 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 12:57:06.137379 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 12:57:06.137391 kernel: Dynamic Preempt: voluntary
Dec 16 12:57:06.137403 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:57:06.137433 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:57:06.137446 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 12:57:06.137458 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:57:06.137470 kernel: Rude variant of Tasks RCU enabled.
Dec 16 12:57:06.137482 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:57:06.137494 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:57:06.137505 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 12:57:06.137517 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:57:06.137532 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:57:06.137549 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 12:57:06.137575 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 16 12:57:06.137588 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:57:06.137601 kernel: Console: colour dummy device 80x25
Dec 16 12:57:06.137612 kernel: printk: legacy console [ttyS0] enabled
Dec 16 12:57:06.137624 kernel: ACPI: Core revision 20240827
Dec 16 12:57:06.137640 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 12:57:06.137652 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 12:57:06.137663 kernel: x2apic enabled
Dec 16 12:57:06.137675 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 12:57:06.137687 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 12:57:06.137699 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 12:57:06.137711 kernel: kvm-guest: setup PV IPIs
Dec 16 12:57:06.137726 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 12:57:06.137738 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 12:57:06.137750 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 16 12:57:06.137762 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 12:57:06.137774 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 12:57:06.137786 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 12:57:06.137798 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 12:57:06.137817 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 12:57:06.137829 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 12:57:06.137841 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 12:57:06.137853 kernel: active return thunk: retbleed_return_thunk
Dec 16 12:57:06.137865 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 12:57:06.137877 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 12:57:06.137889 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 12:57:06.137904 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 12:57:06.137918 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 12:57:06.137929 kernel: active return thunk: srso_return_thunk
Dec 16 12:57:06.137941 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 12:57:06.137954 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 12:57:06.137966 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 12:57:06.137978 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 12:57:06.137992 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 12:57:06.138004 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 12:57:06.138016 kernel: Freeing SMP alternatives memory: 32K
Dec 16 12:57:06.138028 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:57:06.138039 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:57:06.138051 kernel: landlock: Up and running.
Dec 16 12:57:06.138062 kernel: SELinux: Initializing.
Dec 16 12:57:06.138077 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:57:06.138089 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:57:06.138101 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 12:57:06.138113 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 12:57:06.138125 kernel: ... version: 0
Dec 16 12:57:06.138141 kernel: ... bit width: 48
Dec 16 12:57:06.138153 kernel: ... generic registers: 6
Dec 16 12:57:06.138168 kernel: ... value mask: 0000ffffffffffff
Dec 16 12:57:06.138180 kernel: ... max period: 00007fffffffffff
Dec 16 12:57:06.138192 kernel: ... fixed-purpose events: 0
Dec 16 12:57:06.138204 kernel: ... event mask: 000000000000003f
Dec 16 12:57:06.138215 kernel: signal: max sigframe size: 1776
Dec 16 12:57:06.138227 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:57:06.138239 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:57:06.138253 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:57:06.138265 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:57:06.138277 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 12:57:06.138288 kernel: .... node #0, CPUs: #1 #2 #3
Dec 16 12:57:06.138299 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 12:57:06.138311 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 16 12:57:06.138323 kernel: Memory: 2427644K/2552216K available (14336K kernel code, 2444K rwdata, 29892K rodata, 15464K init, 2576K bss, 118632K reserved, 0K cma-reserved)
Dec 16 12:57:06.138338 kernel: devtmpfs: initialized
Dec 16 12:57:06.138349 kernel: x86/mm: Memory block size: 128MB
Dec 16 12:57:06.138360 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Dec 16 12:57:06.138371 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Dec 16 12:57:06.138383 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:57:06.138394 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 12:57:06.138417 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:57:06.138432 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:57:06.138443 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:57:06.138455 kernel: audit: type=2000 audit(1765889823.080:1): state=initialized audit_enabled=0 res=1
Dec 16 12:57:06.138466 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:57:06.138478 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 12:57:06.138490 kernel: cpuidle: using governor menu
Dec 16 12:57:06.138501 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:57:06.138516 kernel: dca service started, version 1.12.1
Dec 16 12:57:06.138527 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 16 12:57:06.138539 kernel: PCI: Using configuration type 1 for base access
Dec 16 12:57:06.138551 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 12:57:06.138590 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:57:06.138602 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:57:06.138613 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:57:06.138628 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:57:06.138639 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:57:06.138651 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:57:06.138662 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:57:06.138674 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:57:06.138686 kernel: ACPI: Interpreter enabled
Dec 16 12:57:06.138697 kernel: ACPI: PM: (supports S0 S5)
Dec 16 12:57:06.138709 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 12:57:06.138723 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 12:57:06.138735 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 12:57:06.138747 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 12:57:06.138759 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 12:57:06.139045 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:57:06.139263 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 12:57:06.139493 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 12:57:06.139511 kernel: PCI host bridge to bus 0000:00
Dec 16 12:57:06.139761 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 12:57:06.139954 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 12:57:06.140149 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 12:57:06.140346 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 16 12:57:06.140554 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 16 12:57:06.140769 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 16 12:57:06.140960 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 12:57:06.141194 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:57:06.141374 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 12:57:06.141557 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 16 12:57:06.141752 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 16 12:57:06.141915 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 12:57:06.142097 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 12:57:06.142289 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 12:57:06.142491 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 16 12:57:06.142711 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 16 12:57:06.142915 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 16 12:57:06.143127 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 12:57:06.143333 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 16 12:57:06.143557 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 16 12:57:06.143800 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 16 12:57:06.144028 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 12:57:06.144233 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 16 12:57:06.144457 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 16 12:57:06.144682 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 16 12:57:06.144888 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 16 12:57:06.145118 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 12:57:06.145319 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 12:57:06.145509 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 12:57:06.145812 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 16 12:57:06.145985 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 16 12:57:06.146165 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 12:57:06.146331 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 16 12:57:06.146343 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 12:57:06.146352 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 12:57:06.146361 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 12:57:06.146370 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 12:57:06.146378 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 12:57:06.146390 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 12:57:06.146399 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 12:57:06.146420 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 12:57:06.146432 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 12:57:06.146444 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 12:57:06.146456 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 12:57:06.146467 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 12:57:06.146481 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 12:57:06.146493 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 12:57:06.146503 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 12:57:06.146511 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 12:57:06.146520 kernel: iommu: Default domain type: Translated
Dec 16 12:57:06.146528 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 12:57:06.146537 kernel: efivars: Registered efivars operations
Dec 16 12:57:06.146548 kernel: PCI: Using ACPI for IRQ routing
Dec 16 12:57:06.146557 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 12:57:06.146587 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Dec 16 12:57:06.146599 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Dec 16 12:57:06.146608 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Dec 16 12:57:06.146616 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Dec 16 12:57:06.146625 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Dec 16 12:57:06.146804 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 12:57:06.146971 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 12:57:06.147135 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 12:57:06.147146 kernel: vgaarb: loaded
Dec 16 12:57:06.147155 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 12:57:06.147164 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 12:57:06.147172 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 12:57:06.147184 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:57:06.147193 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:57:06.147201 kernel: pnp: PnP ACPI init
Dec 16 12:57:06.147377 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 16 12:57:06.147391 kernel: pnp: PnP ACPI: found 6 devices
Dec 16 12:57:06.147399 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 12:57:06.147424 kernel: NET: Registered PF_INET protocol family
Dec 16 12:57:06.147436 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:57:06.147449 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:57:06.147460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:57:06.147471 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:57:06.147481 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:57:06.147490 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:57:06.147501 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:57:06.147509 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:57:06.147518 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:57:06.147526 kernel: NET: Registered PF_XDP protocol family
Dec 16 12:57:06.147716 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 16 12:57:06.147891 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 16 12:57:06.148048 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 12:57:06.148207 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 12:57:06.148359 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 12:57:06.148537 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 16 12:57:06.148807 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 16 12:57:06.148962 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 16 12:57:06.148974 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:57:06.148984 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 12:57:06.148997 kernel: Initialise system trusted keyrings
Dec 16 12:57:06.149006 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:57:06.149014 kernel: Key type asymmetric registered
Dec 16 12:57:06.149023 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:57:06.149048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 12:57:06.149058 kernel: io scheduler mq-deadline registered
Dec 16 12:57:06.149068 kernel: io scheduler kyber registered
Dec 16 12:57:06.149078 kernel: io scheduler bfq registered
Dec 16 12:57:06.149087 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 12:57:06.149097 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 12:57:06.149106 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 12:57:06.149115 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 16 12:57:06.149124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:57:06.149133 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 12:57:06.149144 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 12:57:06.149153 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 12:57:06.149162 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 12:57:06.149171 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 12:57:06.149343 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 16 12:57:06.149559 kernel: rtc_cmos 00:04: registered as rtc0
Dec 16 12:57:06.150057 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T12:57:04 UTC (1765889824)
Dec 16 12:57:06.152692 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 16 12:57:06.152732 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 12:57:06.152751 kernel: efifb: probing for efifb
Dec 16 12:57:06.152761 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 16 12:57:06.152771 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 16 12:57:06.152781 kernel: efifb: scrolling: redraw
Dec 16 12:57:06.152793 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:57:06.152803 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 12:57:06.152815 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:57:06.152824 kernel: pstore: Using crash dump compression: deflate
Dec 16 12:57:06.152834 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 12:57:06.152845 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:57:06.152854 kernel: Segment Routing with IPv6
Dec 16 12:57:06.152864 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:57:06.152873 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:57:06.152883 kernel: Key type dns_resolver registered
Dec 16 12:57:06.152892 kernel: IPI shorthand broadcast: enabled
Dec 16 12:57:06.152901 kernel: sched_clock: Marking stable (2113002927, 263386777)->(2429982986, -53593282)
Dec 16 12:57:06.152912 kernel: registered taskstats version 1
Dec 16 12:57:06.152922 kernel: Loading compiled-in X.509 certificates
Dec 16 12:57:06.152932 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: b90706f42f055ab9f35fc8fc29156d877adb12c4'
Dec 16 12:57:06.152941 kernel: Demotion targets for Node 0: null
Dec 16 12:57:06.152950 kernel: Key type .fscrypt registered
Dec 16 12:57:06.152960 kernel: Key type fscrypt-provisioning registered
Dec 16 12:57:06.152969 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:57:06.152980 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:57:06.152990 kernel: ima: No architecture policies found
Dec 16 12:57:06.152999 kernel: clk: Disabling unused clocks
Dec 16 12:57:06.153009 kernel: Freeing unused kernel image (initmem) memory: 15464K
Dec 16 12:57:06.153018 kernel: Write protecting the kernel read-only data: 45056k
Dec 16 12:57:06.153028 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Dec 16 12:57:06.153037 kernel: Run /init as init process
Dec 16 12:57:06.153049 kernel: with arguments:
Dec 16 12:57:06.153068 kernel: /init
Dec 16 12:57:06.153077 kernel: with environment:
Dec 16 12:57:06.153094 kernel: HOME=/
Dec 16 12:57:06.153103 kernel: TERM=linux
Dec 16 12:57:06.153112 kernel: SCSI subsystem initialized
Dec 16 12:57:06.153121 kernel: libata version 3.00 loaded.
Dec 16 12:57:06.153492 kernel: ahci 0000:00:1f.2: version 3.0
Dec 16 12:57:06.153509 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 16 12:57:06.153703 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Dec 16 12:57:06.153874 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Dec 16 12:57:06.154077 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 16 12:57:06.154270 kernel: scsi host0: ahci
Dec 16 12:57:06.154474 kernel: scsi host1: ahci
Dec 16 12:57:06.154672 kernel: scsi host2: ahci
Dec 16 12:57:06.154850 kernel: scsi host3: ahci
Dec 16 12:57:06.155026 kernel: scsi host4: ahci
Dec 16 12:57:06.155201 kernel: scsi host5: ahci
Dec 16 12:57:06.155219 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Dec 16 12:57:06.155229 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Dec 16 12:57:06.155238 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Dec 16 12:57:06.155248 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Dec 16 12:57:06.155257 kernel:
ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Dec 16 12:57:06.155266 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Dec 16 12:57:06.155276 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:06.155286 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:06.155295 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:06.155304 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:06.155314 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 12:57:06.155324 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 12:57:06.155333 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 12:57:06.155342 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 12:57:06.155353 kernel: ata3.00: applying bridge limits Dec 16 12:57:06.155362 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 12:57:06.155371 kernel: ata3.00: configured for UDMA/100 Dec 16 12:57:06.155594 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 12:57:06.155793 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 12:57:06.155968 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 16 12:57:06.155984 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:57:06.155994 kernel: GPT:16515071 != 27000831 Dec 16 12:57:06.156003 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:57:06.156012 kernel: GPT:16515071 != 27000831 Dec 16 12:57:06.156021 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 16 12:57:06.156030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 12:57:06.156214 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 12:57:06.156229 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:57:06.156467 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 12:57:06.156485 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:57:06.156494 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:57:06.156504 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:57:06.156513 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 12:57:06.156522 kernel: raid6: avx2x4 gen() 29367 MB/s Dec 16 12:57:06.156534 kernel: raid6: avx2x2 gen() 30398 MB/s Dec 16 12:57:06.156543 kernel: raid6: avx2x1 gen() 25483 MB/s Dec 16 12:57:06.156553 kernel: raid6: using algorithm avx2x2 gen() 30398 MB/s Dec 16 12:57:06.156576 kernel: raid6: .... 
xor() 19468 MB/s, rmw enabled Dec 16 12:57:06.156585 kernel: raid6: using avx2x2 recovery algorithm Dec 16 12:57:06.156594 kernel: xor: automatically using best checksumming function avx Dec 16 12:57:06.156604 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:57:06.156615 kernel: BTRFS: device fsid ea73a94a-fb20-4d45-8448-4c6f4c422a4f devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (183) Dec 16 12:57:06.156625 kernel: BTRFS info (device dm-0): first mount of filesystem ea73a94a-fb20-4d45-8448-4c6f4c422a4f Dec 16 12:57:06.156634 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:57:06.156643 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:57:06.156652 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:57:06.156661 kernel: loop: module loaded Dec 16 12:57:06.156671 kernel: loop0: detected capacity change from 0 to 100136 Dec 16 12:57:06.156681 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:57:06.156692 systemd[1]: Successfully made /usr/ read-only. Dec 16 12:57:06.156706 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:57:06.156716 systemd[1]: Detected virtualization kvm. Dec 16 12:57:06.156726 systemd[1]: Detected architecture x86-64. Dec 16 12:57:06.156735 systemd[1]: Running in initrd. Dec 16 12:57:06.156746 systemd[1]: No hostname configured, using default hostname. Dec 16 12:57:06.156756 systemd[1]: Hostname set to . Dec 16 12:57:06.156766 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:57:06.156775 systemd[1]: Queued start job for default target initrd.target. 
Dec 16 12:57:06.156785 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:57:06.156794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:57:06.156806 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:57:06.156816 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 12:57:06.156826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:57:06.156836 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 12:57:06.156846 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 12:57:06.156856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:57:06.156867 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:57:06.156877 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:57:06.156886 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:57:06.156896 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:57:06.156906 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:57:06.156915 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:57:06.156924 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:57:06.156936 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:57:06.156946 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:57:06.156955 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 12:57:06.156965 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Dec 16 12:57:06.156975 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:57:06.156984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:57:06.156994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:57:06.157005 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:57:06.157015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:57:06.157025 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 12:57:06.157035 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:57:06.157044 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 12:57:06.157055 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 12:57:06.157066 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 12:57:06.157076 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:57:06.157085 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:57:06.157096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:06.157107 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 12:57:06.157117 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:57:06.157126 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 12:57:06.157136 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 12:57:06.157172 systemd-journald[318]: Collecting audit messages is enabled. 
Dec 16 12:57:06.157199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 12:57:06.157209 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 12:57:06.157218 kernel: Bridge firewalling registered Dec 16 12:57:06.157228 systemd-journald[318]: Journal started Dec 16 12:57:06.157250 systemd-journald[318]: Runtime Journal (/run/log/journal/f8edfd9e1019434cb9873ea4c32f06f4) is 5.9M, max 47.8M, 41.8M free. Dec 16 12:57:06.162143 kernel: audit: type=1130 audit(1765889826.157:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.162175 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:57:06.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.168259 kernel: audit: type=1130 audit(1765889826.163:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.168752 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:57:06.172737 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 16 12:57:06.177917 systemd-modules-load[320]: Inserted module 'br_netfilter' Dec 16 12:57:06.180053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:57:06.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.184680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:57:06.186212 kernel: audit: type=1130 audit(1765889826.180:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.192712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:06.201600 kernel: audit: type=1130 audit(1765889826.196:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.201025 systemd-tmpfiles[339]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 12:57:06.209916 kernel: audit: type=1130 audit(1765889826.201:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:06.201712 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:57:06.206995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 12:57:06.226826 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:57:06.234439 kernel: audit: type=1130 audit(1765889826.226:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.242518 kernel: audit: type=1130 audit(1765889826.234:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.242552 kernel: audit: type=1334 audit(1765889826.236:9): prog-id=6 op=LOAD Dec 16 12:57:06.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.236000 audit: BPF prog-id=6 op=LOAD Dec 16 12:57:06.234472 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:57:06.240690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:57:06.258745 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:57:06.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:06.264772 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 12:57:06.269748 kernel: audit: type=1130 audit(1765889826.262:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.295798 dracut-cmdline[363]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dd8de2ff094d97322e7371b16ddee5fc8348868bcdd9ec7bcd11ea9d3933fee Dec 16 12:57:06.323247 systemd-resolved[352]: Positive Trust Anchors: Dec 16 12:57:06.323265 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:57:06.323271 systemd-resolved[352]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:57:06.323312 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:57:06.353660 systemd-resolved[352]: Defaulting to hostname 'linux'. Dec 16 12:57:06.355144 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Dec 16 12:57:06.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.355972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:57:06.436600 kernel: Loading iSCSI transport class v2.0-870. Dec 16 12:57:06.450598 kernel: iscsi: registered transport (tcp) Dec 16 12:57:06.474588 kernel: iscsi: registered transport (qla4xxx) Dec 16 12:57:06.474614 kernel: QLogic iSCSI HBA Driver Dec 16 12:57:06.500409 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:57:06.528763 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:57:06.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.530453 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:57:06.585042 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:57:06.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.587468 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:57:06.590448 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:57:06.627142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:57:06.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:06.630000 audit: BPF prog-id=7 op=LOAD Dec 16 12:57:06.630000 audit: BPF prog-id=8 op=LOAD Dec 16 12:57:06.632186 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:57:06.661671 systemd-udevd[602]: Using default interface naming scheme 'v257'. Dec 16 12:57:06.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.675206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:57:06.679605 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:57:06.708942 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation Dec 16 12:57:06.714005 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:57:06.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.716000 audit: BPF prog-id=9 op=LOAD Dec 16 12:57:06.717849 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:57:06.740509 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:57:06.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.742414 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 16 12:57:06.777642 systemd-networkd[716]: lo: Link UP Dec 16 12:57:06.777652 systemd-networkd[716]: lo: Gained carrier Dec 16 12:57:06.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.778358 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:57:06.779658 systemd[1]: Reached target network.target - Network. Dec 16 12:57:06.839077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:57:06.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.843770 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:57:06.917793 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 12:57:06.931588 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 12:57:06.935700 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 12:57:06.949594 kernel: AES CTR mode by8 optimization enabled Dec 16 12:57:06.967477 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 12:57:06.969486 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 12:57:06.973850 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:57:06.973858 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 16 12:57:06.974274 systemd-networkd[716]: eth0: Link UP Dec 16 12:57:06.978364 systemd-networkd[716]: eth0: Gained carrier Dec 16 12:57:06.978390 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:57:06.988340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:57:06.995148 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:57:06.997035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:57:06.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:06.997253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:06.997620 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:57:07.000013 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:07.010663 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:07.017319 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:57:07.017469 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:07.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:07.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:07.023599 disk-uuid[837]: Primary Header is updated. 
Dec 16 12:57:07.023599 disk-uuid[837]: Secondary Entries is updated. Dec 16 12:57:07.023599 disk-uuid[837]: Secondary Header is updated. Dec 16 12:57:07.021255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:07.070452 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:07.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:07.112615 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:57:07.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:07.116292 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:57:07.116977 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:57:07.117535 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:57:07.125470 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:57:07.160020 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:57:07.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:08.068523 disk-uuid[839]: Warning: The kernel is still using the old partition table. Dec 16 12:57:08.068523 disk-uuid[839]: The new table will be used at the next reboot or after you Dec 16 12:57:08.068523 disk-uuid[839]: run partprobe(8) or kpartx(8) Dec 16 12:57:08.068523 disk-uuid[839]: The operation has completed successfully. 
Dec 16 12:57:08.084028 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:57:08.084180 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:57:08.097675 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 16 12:57:08.097709 kernel: audit: type=1130 audit(1765889828.083:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:08.097726 kernel: audit: type=1131 audit(1765889828.083:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:08.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:08.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:08.097911 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:57:08.139604 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (867) Dec 16 12:57:08.142845 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 12:57:08.142868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 12:57:08.146628 kernel: BTRFS info (device vda6): turning on async discard Dec 16 12:57:08.146650 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 12:57:08.154606 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098 Dec 16 12:57:08.155816 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 16 12:57:08.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:08.160178 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:57:08.166991 kernel: audit: type=1130 audit(1765889828.158:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:08.308015 ignition[886]: Ignition 2.22.0 Dec 16 12:57:08.308034 ignition[886]: Stage: fetch-offline Dec 16 12:57:08.308345 ignition[886]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:57:08.308374 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 16 12:57:08.308490 ignition[886]: parsed url from cmdline: "" Dec 16 12:57:08.308495 ignition[886]: no config URL provided Dec 16 12:57:08.308502 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:57:08.308517 ignition[886]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:57:08.308588 ignition[886]: op(1): [started] loading QEMU firmware config module Dec 16 12:57:08.308594 ignition[886]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 16 12:57:08.319382 ignition[886]: op(1): [finished] loading QEMU firmware config module Dec 16 12:57:08.402375 ignition[886]: parsing config with SHA512: debd4f0a0d92c698fa1a2194ef6c3431f02104605fe2b8515c9fb4b53d72184c39f87a00d11b6c6a101c9b6b433a5bec062ccbb50af7137529e8029b82a17128 Dec 16 12:57:08.408593 unknown[886]: fetched base config from "system" Dec 16 12:57:08.408610 unknown[886]: fetched user config from "qemu" Dec 16 12:57:08.411511 ignition[886]: fetch-offline: fetch-offline passed Dec 16 12:57:08.411588 ignition[886]: Ignition finished successfully Dec 16 12:57:08.416339 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 16 12:57:08.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.420512 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 12:57:08.427360 kernel: audit: type=1130 audit(1765889828.420:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.421514 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:57:08.464005 ignition[897]: Ignition 2.22.0
Dec 16 12:57:08.464021 ignition[897]: Stage: kargs
Dec 16 12:57:08.464184 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:57:08.464196 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:57:08.465184 ignition[897]: kargs: kargs passed
Dec 16 12:57:08.470337 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:57:08.479462 kernel: audit: type=1130 audit(1765889828.471:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.465239 ignition[897]: Ignition finished successfully
Dec 16 12:57:08.473457 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:57:08.510458 ignition[905]: Ignition 2.22.0
Dec 16 12:57:08.510471 ignition[905]: Stage: disks
Dec 16 12:57:08.510622 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:57:08.510632 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:57:08.511266 ignition[905]: disks: disks passed
Dec 16 12:57:08.511311 ignition[905]: Ignition finished successfully
Dec 16 12:57:08.524047 kernel: audit: type=1130 audit(1765889828.518:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.516390 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:57:08.524155 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:57:08.525034 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:57:08.527999 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:57:08.528606 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:57:08.535885 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:57:08.540678 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:57:08.578717 systemd-networkd[716]: eth0: Gained IPv6LL
Dec 16 12:57:08.586229 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 16 12:57:08.915612 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
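The Ignition entries in this log follow a regular shape (`ignition[PID]: Stage: NAME` followed later by `NAME: NAME passed`). As a side note, a small parser can summarize the stage progression from a captured console log like this one; this is an illustrative sketch, not a tool referenced by the log, and the regexes are assumptions based on the message shapes above.

```python
import re

# Sketch: summarize Ignition stage results from console-log lines.
# The patterns are inferred from the log above (e.g. "ignition[897]: Stage: kargs"
# and "ignition[897]: kargs: kargs passed"; some entries carry an "INFO : " level).
STAGE_RE = re.compile(r"ignition\[(\d+)\]:\s*(?:INFO : )?Stage: (\S+)")
PASS_RE = re.compile(r"ignition\[(\d+)\]:\s*(?:INFO : )?\S+: (\S+) passed")

def summarize(lines):
    """Return a list of (pid, stage, passed) tuples in log order."""
    order = []
    for line in lines:
        m = STAGE_RE.search(line)
        if m:
            pid, stage = m.groups()
            order.append([pid, stage, False])
        m = PASS_RE.search(line)
        if m:
            pid, stage = m.groups()
            for rec in order:
                if rec[0] == pid and rec[1] == stage:
                    rec[2] = True  # mark the matching stage as passed
    return [tuple(r) for r in order]
```

Feeding it the fetch-offline, kargs, and disks entries above would yield one tuple per stage, each marked passed.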
Dec 16 12:57:08.923608 kernel: audit: type=1130 audit(1765889828.915:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:08.923731 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:57:09.088595 kernel: EXT4-fs (vda9): mounted filesystem 7cac6192-738c-43cc-9341-24f71d091e91 r/w with ordered data mode. Quota mode: none.
Dec 16 12:57:09.088906 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:57:09.090646 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:57:09.126622 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:57:09.130103 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:57:09.134109 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:57:09.134162 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:57:09.137146 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:57:09.148395 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:57:09.152771 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:57:09.160679 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923)
Dec 16 12:57:09.160710 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:57:09.160722 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:57:09.163545 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:57:09.163578 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:57:09.164872 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:57:09.212343 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:57:09.235469 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:57:09.240445 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:57:09.245548 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:57:09.341968 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:57:09.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:09.365103 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:57:09.372261 kernel: audit: type=1130 audit(1765889829.363:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:09.368985 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:57:09.391528 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:57:09.394210 kernel: BTRFS info (device vda6): last unmount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:57:09.412679 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:57:09.419549 kernel: audit: type=1130 audit(1765889829.412:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:09.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:09.437636 ignition[1038]: INFO : Ignition 2.22.0
Dec 16 12:57:09.437636 ignition[1038]: INFO : Stage: mount
Dec 16 12:57:09.440518 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:57:09.440518 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:57:09.440518 ignition[1038]: INFO : mount: mount passed
Dec 16 12:57:09.440518 ignition[1038]: INFO : Ignition finished successfully
Dec 16 12:57:09.452745 kernel: audit: type=1130 audit(1765889829.445:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:09.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:09.445780 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:57:09.448637 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:57:09.473799 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:57:09.507101 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1049)
Dec 16 12:57:09.507137 kernel: BTRFS info (device vda6): first mount of filesystem c87e2a2e-b8fc-4d1d-98f3-593ea9a0f098
Dec 16 12:57:09.507149 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 12:57:09.512426 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 12:57:09.512450 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 12:57:09.515420 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:57:09.548960 ignition[1066]: INFO : Ignition 2.22.0
Dec 16 12:57:09.548960 ignition[1066]: INFO : Stage: files
Dec 16 12:57:09.551917 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:57:09.551917 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:57:09.551917 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:57:09.551917 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:57:09.551917 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:57:09.565707 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:57:09.568529 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:57:09.568529 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:57:09.568529 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 12:57:09.568529 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Dec 16 12:57:09.566274 unknown[1066]: wrote ssh authorized keys file for user: core
Dec 16 12:57:09.604674 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:57:09.693900 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Dec 16 12:57:09.697171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:57:09.697171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:57:09.697171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:57:09.697171 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:57:09.708594 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:57:09.708594 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:57:09.708594 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:57:09.708594 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:57:09.768730 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:57:09.772303 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:57:09.772303 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:57:09.780339 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:57:09.780339 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:57:09.780339 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Dec 16 12:57:10.242225 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:57:10.490502 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Dec 16 12:57:10.490502 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 12:57:10.496953 ignition[1066]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:57:10.526400 ignition[1066]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:57:10.534498 ignition[1066]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 12:57:10.537244 ignition[1066]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 12:57:10.537244 ignition[1066]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:57:10.537244 ignition[1066]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:57:10.537244 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:57:10.537244 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:57:10.537244 ignition[1066]: INFO : files: files passed
Dec 16 12:57:10.537244 ignition[1066]: INFO : Ignition finished successfully
Dec 16 12:57:10.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.546090 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:57:10.550882 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:57:10.557693 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:57:10.574124 systemd[1]: ignition-quench.service: Deactivated successfully.
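For context, the files-stage operations logged above (write files, write the prepare-helm.service unit, enable it) correspond to entries in an Ignition JSON config. The fragment below is a hypothetical reconstruction sketched from the log alone: only the paths and unit names come from the log; the spec version, mode, and elided contents (`...`) are assumptions.

```json
{
  "ignition": { "version": "3.4.0" },
  "storage": {
    "files": [
      {
        "path": "/home/core/nginx.yaml",
        "contents": { "source": "data:,..." },
        "mode": 420
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true, "contents": "..." }
    ]
  }
}
```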
Dec 16 12:57:10.574265 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:57:10.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.582291 initrd-setup-root-after-ignition[1097]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 12:57:10.587752 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:57:10.587752 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:57:10.593239 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:57:10.597774 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:57:10.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.598690 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:57:10.605716 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:57:10.674482 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:57:10.674637 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
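The `audit[1]: SERVICE_START` / `SERVICE_STOP` records interleaved throughout this log carry their fields as `key=value` pairs inside `msg='...'`. As an aside, a small parser can extract the unit and result from such lines; this is an illustrative sketch based on the record shape visible above, not a tool referenced by the log.

```python
import re

# Sketch: pull (event, unit, result) out of an audit service record as it
# appears in this console log. The record shape is taken from the log above.
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?msg='([^']*)'")

def parse_audit(line):
    """Return (event, unit, result) for an audit service record, else None."""
    m = AUDIT_RE.search(line)
    if not m:
        return None
    event, msg = m.groups()
    # Split the msg payload into key=value fields (values may be quoted or "?").
    fields = dict(kv.split("=", 1) for kv in msg.split() if "=" in kv)
    return event, fields.get("unit"), fields.get("res")
```

Applied to the ignition-quench records above, this yields matching START/STOP pairs with `res=success`, which is how a successful oneshot unit appears in the audit stream.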
Dec 16 12:57:10.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.675737 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:57:10.680906 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:57:10.684822 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:57:10.685834 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:57:10.715676 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:57:10.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.719671 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:57:10.740041 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:57:10.740250 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:57:10.743164 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:57:10.746448 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:57:10.750045 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:57:10.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.750161 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:57:10.755616 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:57:10.756480 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:57:10.761121 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:57:10.766756 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:57:10.767481 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:57:10.771001 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:57:10.774592 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:57:10.777968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:57:10.781104 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:57:10.785194 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:57:10.788005 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:57:10.791025 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:57:10.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.791140 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:57:10.796103 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:57:10.799412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:57:10.800320 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:57:10.806171 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:57:10.807149 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:57:10.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.807293 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:57:10.813476 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:57:10.813618 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:57:10.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.816941 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:57:10.819810 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:57:10.824695 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:57:10.829111 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:57:10.830147 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:57:10.832655 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:57:10.832752 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:57:10.835690 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:57:10.835777 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:57:10.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.838495 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 12:57:10.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.838598 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 12:57:10.841552 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:57:10.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.841700 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:57:10.844618 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:57:10.844733 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:57:10.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.848998 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:57:10.851097 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 12:57:10.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.851235 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:57:10.852821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:57:10.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.860447 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:57:10.860593 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:57:10.861451 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:57:10.861558 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:57:10.866004 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:57:10.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.866113 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:57:10.879931 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:57:10.880945 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:57:10.901137 ignition[1123]: INFO : Ignition 2.22.0
Dec 16 12:57:10.902933 ignition[1123]: INFO : Stage: umount
Dec 16 12:57:10.902933 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:57:10.902933 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 12:57:10.907666 ignition[1123]: INFO : umount: umount passed
Dec 16 12:57:10.907666 ignition[1123]: INFO : Ignition finished successfully
Dec 16 12:57:10.910789 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:57:10.910935 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:57:10.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.914025 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:57:10.916328 systemd[1]: Stopped target network.target - Network.
Dec 16 12:57:10.919127 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:57:10.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.919189 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:57:10.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.922116 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:57:10.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.922169 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:57:10.925120 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:57:10.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.925172 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:57:10.928299 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:57:10.928354 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:57:10.931521 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:57:10.932286 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:57:10.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.941396 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:57:10.941542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:57:10.981000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 12:57:10.981952 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:57:10.982144 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:57:10.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:10.988376 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:57:10.989087 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:57:10.989132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:57:10.997738 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:57:10.998430 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:57:10.998501 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:57:11.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:11.007068 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:57:11.007188 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:57:11.008431 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:57:11.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.008000 audit: BPF prog-id=9 op=UNLOAD Dec 16 12:57:11.008496 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:57:11.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.014026 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:57:11.020180 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:57:11.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.020302 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:57:11.026472 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:57:11.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.026543 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:57:11.039489 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:57:11.046755 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 16 12:57:11.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.047733 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:57:11.047780 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:57:11.051347 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:57:11.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.051387 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:57:11.051869 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:57:11.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.051920 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:57:11.060283 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:57:11.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.060336 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:57:11.083878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:57:11.083932 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 12:57:11.089504 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:57:11.091045 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Dec 16 12:57:11.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.091097 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:57:11.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.093969 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:57:11.094027 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:57:11.094509 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:57:11.094579 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:11.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.103506 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:57:11.129736 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:57:11.137992 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:57:11.138130 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:57:11.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:57:11.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:11.141874 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 12:57:11.145943 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:57:11.178023 systemd[1]: Switching root. Dec 16 12:57:11.224527 systemd-journald[318]: Journal stopped Dec 16 12:57:14.117735 systemd-journald[318]: Received SIGTERM from PID 1 (systemd). Dec 16 12:57:14.117802 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:57:14.117817 kernel: SELinux: policy capability open_perms=1 Dec 16 12:57:14.117829 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:57:14.117846 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:57:14.117858 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:57:14.117870 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:57:14.117882 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:57:14.117896 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:57:14.117909 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:57:14.117922 systemd[1]: Successfully loaded SELinux policy in 64.958ms. Dec 16 12:57:14.117944 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.742ms. Dec 16 12:57:14.117958 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:57:14.117974 systemd[1]: Detected virtualization kvm. 
Dec 16 12:57:14.117987 systemd[1]: Detected architecture x86-64. Dec 16 12:57:14.118001 systemd[1]: Detected first boot. Dec 16 12:57:14.118018 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 12:57:14.118031 kernel: kauditd_printk_skb: 45 callbacks suppressed Dec 16 12:57:14.118047 kernel: audit: type=1334 audit(1765889833.276:84): prog-id=10 op=LOAD Dec 16 12:57:14.118064 kernel: audit: type=1334 audit(1765889833.276:85): prog-id=10 op=UNLOAD Dec 16 12:57:14.118080 kernel: audit: type=1334 audit(1765889833.276:86): prog-id=11 op=LOAD Dec 16 12:57:14.118092 kernel: audit: type=1334 audit(1765889833.276:87): prog-id=11 op=UNLOAD Dec 16 12:57:14.118107 zram_generator::config[1168]: No configuration found. Dec 16 12:57:14.118121 kernel: Guest personality initialized and is inactive Dec 16 12:57:14.118133 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 12:57:14.118145 kernel: Initialized host personality Dec 16 12:57:14.118157 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:57:14.118169 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:57:14.118182 kernel: audit: type=1334 audit(1765889833.800:88): prog-id=12 op=LOAD Dec 16 12:57:14.118209 kernel: audit: type=1334 audit(1765889833.800:89): prog-id=3 op=UNLOAD Dec 16 12:57:14.118221 kernel: audit: type=1334 audit(1765889833.800:90): prog-id=13 op=LOAD Dec 16 12:57:14.118234 kernel: audit: type=1334 audit(1765889833.800:91): prog-id=14 op=LOAD Dec 16 12:57:14.118246 kernel: audit: type=1334 audit(1765889833.800:92): prog-id=4 op=UNLOAD Dec 16 12:57:14.118259 kernel: audit: type=1334 audit(1765889833.800:93): prog-id=5 op=UNLOAD Dec 16 12:57:14.118277 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:57:14.118290 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:57:14.118306 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Dec 16 12:57:14.118323 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:57:14.118337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:57:14.118350 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:57:14.118362 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:57:14.118376 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:57:14.118391 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:57:14.118404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:57:14.118416 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:57:14.118429 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:57:14.118442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:57:14.118456 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:57:14.118468 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:57:14.118484 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 12:57:14.118497 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:57:14.118510 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 12:57:14.118522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:57:14.118535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:57:14.118548 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Dec 16 12:57:14.118574 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:57:14.118590 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:57:14.118603 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:57:14.118615 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:57:14.118628 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:57:14.118641 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 12:57:14.118654 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:57:14.118667 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:57:14.118682 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:57:14.118695 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:57:14.118708 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:57:14.118905 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 12:57:14.118923 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 12:57:14.118936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:57:14.118949 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 12:57:14.118962 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 12:57:14.118975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:57:14.118988 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:57:14.119001 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:57:14.119016 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 16 12:57:14.119029 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:57:14.119044 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:57:14.119057 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:14.119074 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:57:14.119087 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:57:14.119100 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:57:14.119115 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:57:14.119128 systemd[1]: Reached target machines.target - Containers. Dec 16 12:57:14.119142 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:57:14.119155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:14.119168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:57:14.119182 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:57:14.119206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:57:14.119219 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:57:14.119231 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:57:14.119245 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:57:14.119258 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 16 12:57:14.119272 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:57:14.119285 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 12:57:14.119300 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:57:14.119313 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:57:14.119326 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:57:14.119342 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:14.119357 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:57:14.119369 kernel: fuse: init (API version 7.41) Dec 16 12:57:14.119382 kernel: ACPI: bus type drm_connector registered Dec 16 12:57:14.119394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:57:14.119407 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:57:14.119420 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:57:14.119433 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 12:57:14.119468 systemd-journald[1247]: Collecting audit messages is enabled. Dec 16 12:57:14.119493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:57:14.119506 systemd-journald[1247]: Journal started Dec 16 12:57:14.119529 systemd-journald[1247]: Runtime Journal (/run/log/journal/f8edfd9e1019434cb9873ea4c32f06f4) is 5.9M, max 47.8M, 41.8M free. 
Dec 16 12:57:13.926000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 12:57:14.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.064000 audit: BPF prog-id=14 op=UNLOAD Dec 16 12:57:14.064000 audit: BPF prog-id=13 op=UNLOAD Dec 16 12:57:14.065000 audit: BPF prog-id=15 op=LOAD Dec 16 12:57:14.065000 audit: BPF prog-id=16 op=LOAD Dec 16 12:57:14.065000 audit: BPF prog-id=17 op=LOAD Dec 16 12:57:14.115000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 12:57:14.115000 audit[1247]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc5025da90 a2=4000 a3=0 items=0 ppid=1 pid=1247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:14.115000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 12:57:13.789290 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:57:13.801712 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 12:57:13.802287 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 16 12:57:14.126652 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:14.131876 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:57:14.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.133176 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:57:14.135178 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:57:14.137245 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:57:14.139069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:57:14.141119 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:57:14.143217 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:57:14.145256 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:57:14.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.147695 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:57:14.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.150107 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:57:14.150339 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Dec 16 12:57:14.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.152660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:57:14.152877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:57:14.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.155185 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:57:14.155407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:57:14.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.157557 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:57:14.157787 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 16 12:57:14.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.160244 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 12:57:14.160453 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:57:14.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.162725 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:57:14.162933 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:57:14.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.165274 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Dec 16 12:57:14.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.167964 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:57:14.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.171592 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:57:14.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.174208 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:57:14.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.190439 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:57:14.192926 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 12:57:14.196255 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:57:14.199170 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:57:14.201300 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 16 12:57:14.201392 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:57:14.204235 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:57:14.206954 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:14.207086 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:57:14.209758 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:57:14.217179 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 12:57:14.224559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:57:14.225819 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:57:14.227902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:57:14.228620 systemd-journald[1247]: Time spent on flushing to /var/log/journal/f8edfd9e1019434cb9873ea4c32f06f4 is 15.263ms for 1170 entries. Dec 16 12:57:14.228620 systemd-journald[1247]: System Journal (/var/log/journal/f8edfd9e1019434cb9873ea4c32f06f4) is 8M, max 163.5M, 155.5M free. Dec 16 12:57:14.942713 systemd-journald[1247]: Received client request to flush runtime journal. 
Dec 16 12:57:14.942797 kernel: loop1: detected capacity change from 0 to 111544 Dec 16 12:57:14.942837 kernel: loop2: detected capacity change from 0 to 229808 Dec 16 12:57:14.942858 kernel: loop3: detected capacity change from 0 to 119256 Dec 16 12:57:14.942877 kernel: loop4: detected capacity change from 0 to 111544 Dec 16 12:57:14.942896 kernel: loop5: detected capacity change from 0 to 229808 Dec 16 12:57:14.942917 kernel: loop6: detected capacity change from 0 to 119256 Dec 16 12:57:14.942936 zram_generator::config[1324]: No configuration found. Dec 16 12:57:14.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.230838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:57:14.235690 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:57:14.238578 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:57:14.242465 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:57:14.245884 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:57:14.247938 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:57:14.435275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:57:14.578354 (sd-merge)[1299]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 16 12:57:14.582207 (sd-merge)[1299]: Merged extensions into '/usr'. 
Dec 16 12:57:14.587061 systemd[1]: Reload requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:57:14.587075 systemd[1]: Reloading... Dec 16 12:57:14.934933 systemd[1]: Reloading finished in 347 ms. Dec 16 12:57:14.970978 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:57:14.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.973508 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:57:14.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.975774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:57:14.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.978281 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:57:14.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:14.986813 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:57:15.000274 systemd[1]: Starting ensure-sysext.service... Dec 16 12:57:15.002785 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Dec 16 12:57:15.005000 audit: BPF prog-id=18 op=LOAD Dec 16 12:57:15.005000 audit: BPF prog-id=19 op=LOAD Dec 16 12:57:15.005000 audit: BPF prog-id=20 op=LOAD Dec 16 12:57:15.006884 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 12:57:15.009000 audit: BPF prog-id=21 op=LOAD Dec 16 12:57:15.010922 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 12:57:15.013687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:57:15.017728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:57:15.024000 audit: BPF prog-id=22 op=LOAD Dec 16 12:57:15.024000 audit: BPF prog-id=15 op=UNLOAD Dec 16 12:57:15.025000 audit: BPF prog-id=23 op=LOAD Dec 16 12:57:15.025000 audit: BPF prog-id=24 op=LOAD Dec 16 12:57:15.025000 audit: BPF prog-id=16 op=UNLOAD Dec 16 12:57:15.025000 audit: BPF prog-id=17 op=UNLOAD Dec 16 12:57:15.036000 audit: BPF prog-id=25 op=LOAD Dec 16 12:57:15.036000 audit: BPF prog-id=26 op=LOAD Dec 16 12:57:15.036000 audit: BPF prog-id=27 op=LOAD Dec 16 12:57:15.038778 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 12:57:15.041000 audit: BPF prog-id=28 op=LOAD Dec 16 12:57:15.041000 audit: BPF prog-id=29 op=LOAD Dec 16 12:57:15.041000 audit: BPF prog-id=30 op=LOAD Dec 16 12:57:15.042848 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:57:15.045736 systemd[1]: Reload requested from client PID 1367 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:57:15.045750 systemd[1]: Reloading... Dec 16 12:57:15.045854 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:57:15.045884 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Dec 16 12:57:15.046133 systemd-tmpfiles[1372]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:57:15.048405 systemd-tmpfiles[1371]: ACLs are not supported, ignoring. Dec 16 12:57:15.048422 systemd-tmpfiles[1371]: ACLs are not supported, ignoring. Dec 16 12:57:15.048517 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Dec 16 12:57:15.048613 systemd-tmpfiles[1372]: ACLs are not supported, ignoring. Dec 16 12:57:15.090130 systemd-nsresourced[1375]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 12:57:15.118613 zram_generator::config[1420]: No configuration found. Dec 16 12:57:15.412516 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:57:15.412535 systemd-tmpfiles[1372]: Skipping /boot Dec 16 12:57:15.427227 systemd-tmpfiles[1372]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:57:15.427245 systemd-tmpfiles[1372]: Skipping /boot Dec 16 12:57:15.513063 systemd[1]: Reloading finished in 466 ms. Dec 16 12:57:15.536561 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 12:57:15.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.555720 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 12:57:15.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:15.559604 systemd-oomd[1369]: No swap; memory pressure usage will be degraded Dec 16 12:57:15.562000 audit: BPF prog-id=31 op=LOAD Dec 16 12:57:15.562000 audit: BPF prog-id=25 op=UNLOAD Dec 16 12:57:15.562000 audit: BPF prog-id=32 op=LOAD Dec 16 12:57:15.562000 audit: BPF prog-id=33 op=LOAD Dec 16 12:57:15.562000 audit: BPF prog-id=26 op=UNLOAD Dec 16 12:57:15.562000 audit: BPF prog-id=27 op=UNLOAD Dec 16 12:57:15.563000 audit: BPF prog-id=34 op=LOAD Dec 16 12:57:15.563000 audit: BPF prog-id=18 op=UNLOAD Dec 16 12:57:15.563000 audit: BPF prog-id=35 op=LOAD Dec 16 12:57:15.563000 audit: BPF prog-id=36 op=LOAD Dec 16 12:57:15.563000 audit: BPF prog-id=19 op=UNLOAD Dec 16 12:57:15.563000 audit: BPF prog-id=20 op=UNLOAD Dec 16 12:57:15.564000 audit: BPF prog-id=37 op=LOAD Dec 16 12:57:15.564000 audit: BPF prog-id=28 op=UNLOAD Dec 16 12:57:15.565000 audit: BPF prog-id=38 op=LOAD Dec 16 12:57:15.565000 audit: BPF prog-id=39 op=LOAD Dec 16 12:57:15.565000 audit: BPF prog-id=29 op=UNLOAD Dec 16 12:57:15.565000 audit: BPF prog-id=30 op=UNLOAD Dec 16 12:57:15.566000 audit: BPF prog-id=40 op=LOAD Dec 16 12:57:15.571000 audit: BPF prog-id=22 op=UNLOAD Dec 16 12:57:15.571000 audit: BPF prog-id=41 op=LOAD Dec 16 12:57:15.571000 audit: BPF prog-id=42 op=LOAD Dec 16 12:57:15.571000 audit: BPF prog-id=23 op=UNLOAD Dec 16 12:57:15.571000 audit: BPF prog-id=24 op=UNLOAD Dec 16 12:57:15.571000 audit: BPF prog-id=43 op=LOAD Dec 16 12:57:15.572000 audit: BPF prog-id=21 op=UNLOAD Dec 16 12:57:15.573060 systemd-resolved[1370]: Positive Trust Anchors: Dec 16 12:57:15.573072 systemd-resolved[1370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 12:57:15.573077 systemd-resolved[1370]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 12:57:15.573108 systemd-resolved[1370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 12:57:15.576785 systemd-resolved[1370]: Defaulting to hostname 'linux'. Dec 16 12:57:15.576792 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 12:57:15.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.592753 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:57:15.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.595047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:57:15.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.633789 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 16 12:57:15.639634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:15.639976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:15.641672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:57:15.644707 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:57:15.649507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:57:15.651725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:15.652049 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:57:15.652308 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:15.652558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:15.657407 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:15.657654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:15.657897 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:15.658106 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 16 12:57:15.658248 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:15.658386 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:15.661712 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:15.661996 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:57:15.664023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:57:15.665949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:57:15.666180 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 12:57:15.666325 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:57:15.666521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 12:57:15.668181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:57:15.668497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:57:15.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:15.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.670925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:57:15.671161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:57:15.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.674091 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:57:15.674356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:57:15.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.676927 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:57:15.677185 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:57:15.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:15.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.682398 systemd[1]: Finished ensure-sysext.service. Dec 16 12:57:15.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.687545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:57:15.687625 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:57:15.688000 audit: BPF prog-id=44 op=LOAD Dec 16 12:57:15.689640 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 12:57:15.790236 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 12:57:15.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.792592 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 12:57:15.888536 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:57:15.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:15.895345 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 16 12:57:16.042380 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:57:16.045686 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:57:16.053829 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 12:57:16.057884 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 12:57:16.073000 audit[1483]: SYSTEM_BOOT pid=1483 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 12:57:16.080706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 12:57:16.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:16.098557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 12:57:16.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:16.118776 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 12:57:16.121106 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 12:57:16.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:16.940000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 12:57:16.940000 audit[1498]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff5f5c46e0 a2=420 a3=0 items=0 ppid=1471 pid=1498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:16.940000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:57:16.941746 augenrules[1498]: No rules Dec 16 12:57:16.944235 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:57:16.944997 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:57:17.159662 ldconfig[1474]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 12:57:17.549637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:57:17.595984 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:57:17.641198 systemd-udevd[1505]: Using default interface naming scheme 'v257'. Dec 16 12:57:17.680789 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:57:17.685965 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:57:17.735718 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 16 12:57:17.829629 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 12:57:17.832584 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:57:17.837607 kernel: ACPI: button: Power Button [PWRF] Dec 16 12:57:17.850940 systemd-networkd[1515]: lo: Link UP Dec 16 12:57:17.850953 systemd-networkd[1515]: lo: Gained carrier Dec 16 12:57:17.851972 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:57:17.853873 systemd[1]: Reached target network.target - Network. Dec 16 12:57:17.858770 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:57:17.862991 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:57:17.869321 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 16 12:57:17.869699 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 12:57:17.873354 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 12:57:17.888686 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:57:17.888814 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:57:17.889487 systemd-networkd[1515]: eth0: Link UP Dec 16 12:57:17.889734 systemd-networkd[1515]: eth0: Gained carrier Dec 16 12:57:17.889878 systemd-networkd[1515]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 12:57:17.901663 systemd-networkd[1515]: eth0: DHCPv4 address 10.0.0.102/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 12:57:17.903259 systemd-timesyncd[1466]: Network configuration changed, trying to establish connection. Dec 16 12:57:18.663605 systemd-resolved[1370]: Clock change detected. Flushing caches. 
Dec 16 12:57:18.663875 systemd-timesyncd[1466]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 12:57:18.664001 systemd-timesyncd[1466]: Initial clock synchronization to Tue 2025-12-16 12:57:18.663565 UTC. Dec 16 12:57:18.736361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:57:18.756510 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:57:18.762711 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 12:57:18.765880 kernel: kvm_amd: TSC scaling supported Dec 16 12:57:18.765923 kernel: kvm_amd: Nested Virtualization enabled Dec 16 12:57:18.765949 kernel: kvm_amd: Nested Paging enabled Dec 16 12:57:18.765965 kernel: kvm_amd: LBR virtualization supported Dec 16 12:57:18.765981 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 16 12:57:18.765999 kernel: kvm_amd: Virtual GIF supported Dec 16 12:57:18.789941 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 12:57:18.799042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 12:57:18.809083 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:57:18.829427 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 12:57:18.836893 kernel: EDAC MC: Ver: 3.0.0 Dec 16 12:57:18.845168 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 12:57:18.870791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:57:18.875032 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:57:18.876908 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 16 12:57:18.878955 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 12:57:18.881069 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 12:57:18.883124 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 12:57:18.884984 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 12:57:18.887030 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 12:57:18.889162 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 12:57:18.891010 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 12:57:18.893075 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 12:57:18.893108 systemd[1]: Reached target paths.target - Path Units. Dec 16 12:57:18.894606 systemd[1]: Reached target timers.target - Timer Units. Dec 16 12:57:18.897150 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 12:57:18.900627 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 12:57:18.904245 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 12:57:18.906730 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 12:57:18.909055 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 12:57:18.916901 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 12:57:18.919156 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 12:57:18.921946 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Dec 16 12:57:18.924671 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 12:57:18.926498 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:57:18.928282 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:57:18.928311 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 12:57:18.929415 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 12:57:18.932374 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 12:57:18.936080 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:57:18.944989 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:57:18.948501 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:57:18.950543 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:57:18.951908 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 12:57:19.005192 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:57:19.007923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:57:19.010602 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:57:19.014291 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:57:19.020138 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:57:19.021848 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:57:19.022431 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Dec 16 12:57:19.023158 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:57:19.025881 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:57:19.034392 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:57:19.036856 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:57:19.037749 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:57:19.041473 jq[1575]: false Dec 16 12:57:19.042176 jq[1586]: true Dec 16 12:57:19.042283 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 12:57:19.042560 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:57:19.048132 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing passwd entry cache Dec 16 12:57:19.048146 oslogin_cache_refresh[1577]: Refreshing passwd entry cache Dec 16 12:57:19.048397 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:57:19.049951 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:57:19.060054 update_engine[1585]: I20251216 12:57:19.059675 1585 main.cc:92] Flatcar Update Engine starting Dec 16 12:57:19.126678 jq[1603]: true Dec 16 12:57:19.127165 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting users, quitting Dec 16 12:57:19.127165 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 12:57:19.127130 oslogin_cache_refresh[1577]: Failure getting users, quitting Dec 16 12:57:19.127151 oslogin_cache_refresh[1577]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Dec 16 12:57:19.130775 extend-filesystems[1576]: Found /dev/vda6 Dec 16 12:57:19.139926 tar[1590]: linux-amd64/LICENSE Dec 16 12:57:19.140175 tar[1590]: linux-amd64/helm Dec 16 12:57:19.167365 dbus-daemon[1573]: [system] SELinux support is enabled Dec 16 12:57:19.175376 update_engine[1585]: I20251216 12:57:19.175181 1585 update_check_scheduler.cc:74] Next update check in 7m17s Dec 16 12:57:19.178326 systemd-logind[1584]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 12:57:19.178363 systemd-logind[1584]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 12:57:19.179048 systemd-logind[1584]: New seat seat0. Dec 16 12:57:19.185162 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:57:19.188461 extend-filesystems[1576]: Found /dev/vda9 Dec 16 12:57:19.191980 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:57:19.192594 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:57:19.194737 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:57:19.198155 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 12:57:19.198215 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:57:19.200307 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:57:19.200330 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:57:19.200350 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:57:19.202516 systemd[1]: Started update-engine.service - Update Engine. 
Dec 16 12:57:19.206148 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:57:19.332171 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:57:19.358245 extend-filesystems[1576]: Checking size of /dev/vda9 Dec 16 12:57:19.383935 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Refreshing group entry cache Dec 16 12:57:19.384057 oslogin_cache_refresh[1577]: Refreshing group entry cache Dec 16 12:57:19.391792 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Failure getting groups, quitting Dec 16 12:57:19.391885 oslogin_cache_refresh[1577]: Failure getting groups, quitting Dec 16 12:57:19.391943 google_oslogin_nss_cache[1577]: oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:57:19.391980 oslogin_cache_refresh[1577]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 12:57:19.394258 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 12:57:19.394659 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 12:57:19.433226 tar[1590]: linux-amd64/README.md Dec 16 12:57:19.455805 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:57:19.461160 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:57:19.485248 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Dec 16 12:57:19.487898 containerd[1605]: time="2025-12-16T12:57:19Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:57:19.491568 containerd[1605]: time="2025-12-16T12:57:19.489087156Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 12:57:19.489658 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:57:19.498139 containerd[1605]: time="2025-12-16T12:57:19.498090410Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.756µs" Dec 16 12:57:19.498139 containerd[1605]: time="2025-12-16T12:57:19.498123181Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:57:19.498192 containerd[1605]: time="2025-12-16T12:57:19.498168186Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:57:19.498192 containerd[1605]: time="2025-12-16T12:57:19.498179337Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:57:19.498392 containerd[1605]: time="2025-12-16T12:57:19.498361609Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:57:19.498392 containerd[1605]: time="2025-12-16T12:57:19.498380033Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:57:19.498473 containerd[1605]: time="2025-12-16T12:57:19.498444624Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:57:19.498473 containerd[1605]: time="2025-12-16T12:57:19.498459472Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499122 containerd[1605]: time="2025-12-16T12:57:19.498935595Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499122 containerd[1605]: time="2025-12-16T12:57:19.498962285Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499122 containerd[1605]: time="2025-12-16T12:57:19.498978325Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499122 containerd[1605]: time="2025-12-16T12:57:19.498988474Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499254 containerd[1605]: time="2025-12-16T12:57:19.499224877Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499280 containerd[1605]: time="2025-12-16T12:57:19.499252830Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499494 containerd[1605]: time="2025-12-16T12:57:19.499464617Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499773 containerd[1605]: time="2025-12-16T12:57:19.499747167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499803 containerd[1605]: time="2025-12-16T12:57:19.499787623Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:57:19.499803 containerd[1605]: time="2025-12-16T12:57:19.499797642Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:57:19.499865 containerd[1605]: time="2025-12-16T12:57:19.499844650Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:57:19.500021 containerd[1605]: time="2025-12-16T12:57:19.499998839Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:57:19.500082 containerd[1605]: time="2025-12-16T12:57:19.500063330Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:57:19.515187 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:57:19.515508 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:57:19.519217 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:57:19.541710 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:57:19.545204 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 12:57:19.547924 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 12:57:19.549907 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 12:57:19.613794 extend-filesystems[1576]: Resized partition /dev/vda9 Dec 16 12:57:19.616905 extend-filesystems[1670]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:57:19.839882 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 16 12:57:20.315857 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 16 12:57:20.572894 extend-filesystems[1670]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 12:57:20.572894 extend-filesystems[1670]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 12:57:20.572894 extend-filesystems[1670]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 16 12:57:20.580723 extend-filesystems[1576]: Resized filesystem in /dev/vda9 Dec 16 12:57:20.574138 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581477878Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581547128Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581642016Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581662064Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581681139Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581692150Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:57:20.582901 
containerd[1605]: time="2025-12-16T12:57:20.581703722Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581713290Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581724861Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581736122Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581748736Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581762662Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581774384Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:57:20.582901 containerd[1605]: time="2025-12-16T12:57:20.581788551Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:57:20.583579 bash[1631]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:57:20.574482 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582071421Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582093222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582105625Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582119171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582129670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582146161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582156902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582166550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582177580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582187669Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582206154Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582229177Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582280473Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582316210Z" level=info msg="Start snapshots syncer" Dec 16 12:57:20.583793 containerd[1605]: time="2025-12-16T12:57:20.582354332Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:57:20.584192 containerd[1605]: time="2025-12-16T12:57:20.582650587Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsU
nderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:57:20.584192 containerd[1605]: time="2025-12-16T12:57:20.582733333Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.582799236Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.582936624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.582959507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.582970077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.582981368Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.582992829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.583008048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.583018387Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.583028396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.583039377Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.583075484Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.583088198Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:57:20.584332 containerd[1605]: time="2025-12-16T12:57:20.583097055Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583106733Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583116211Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583127872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583139705Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583152619Z" level=info msg="runtime interface created" Dec 16 12:57:20.584599 containerd[1605]: 
time="2025-12-16T12:57:20.583158810Z" level=info msg="created NRI interface" Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583166866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583177486Z" level=info msg="Connect containerd service" Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583195429Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:57:20.584599 containerd[1605]: time="2025-12-16T12:57:20.583906853Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:57:20.585489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:57:20.588846 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 12:57:20.602008 systemd-networkd[1515]: eth0: Gained IPv6LL Dec 16 12:57:20.605793 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:57:20.609009 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:57:20.614079 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 12:57:20.619619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:20.633104 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 12:57:20.666293 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 12:57:20.669278 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 12:57:20.669666 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Dec 16 12:57:20.673003 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.693763764Z" level=info msg="Start subscribing containerd event" Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.693820090Z" level=info msg="Start recovering state" Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.693965432Z" level=info msg="Start event monitor" Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.693980130Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.693995068Z" level=info msg="Start streaming server" Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.694005217Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.694013613Z" level=info msg="runtime interface starting up..." Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.694017039Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.694066682Z" level=info msg="starting plugins..." Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.694084826Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.694094825Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:57:20.694432 containerd[1605]: time="2025-12-16T12:57:20.694236140Z" level=info msg="containerd successfully booted in 1.207198s" Dec 16 12:57:20.694564 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:57:21.406447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:57:21.408942 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:57:21.411054 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:57:21.411092 systemd[1]: Startup finished in 3.280s (kernel) + 7.183s (initrd) + 7.768s (userspace) = 18.232s. Dec 16 12:57:21.857078 kubelet[1714]: E1216 12:57:21.857005 1714 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:57:21.861376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:57:21.861575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:57:21.862061 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 266.1M memory peak. Dec 16 12:57:28.759259 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:57:28.760465 systemd[1]: Started sshd@0-10.0.0.102:22-10.0.0.1:50728.service - OpenSSH per-connection server daemon (10.0.0.1:50728). Dec 16 12:57:28.841749 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 50728 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:57:28.843445 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:28.850055 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:57:28.851107 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:57:28.855962 systemd-logind[1584]: New session 1 of user core. Dec 16 12:57:28.874057 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Dec 16 12:57:28.877469 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:57:28.899635 (systemd)[1733]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:57:28.902272 systemd-logind[1584]: New session c1 of user core. Dec 16 12:57:29.043727 systemd[1733]: Queued start job for default target default.target. Dec 16 12:57:29.063047 systemd[1733]: Created slice app.slice - User Application Slice. Dec 16 12:57:29.063085 systemd[1733]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 16 12:57:29.063100 systemd[1733]: Reached target paths.target - Paths. Dec 16 12:57:29.063145 systemd[1733]: Reached target timers.target - Timers. Dec 16 12:57:29.064590 systemd[1733]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:57:29.065534 systemd[1733]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 12:57:29.076588 systemd[1733]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:57:29.076865 systemd[1733]: Reached target sockets.target - Sockets. Dec 16 12:57:29.078283 systemd[1733]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 12:57:29.078404 systemd[1733]: Reached target basic.target - Basic System. Dec 16 12:57:29.078463 systemd[1733]: Reached target default.target - Main User Target. Dec 16 12:57:29.078496 systemd[1733]: Startup finished in 169ms. Dec 16 12:57:29.078714 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:57:29.092038 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:57:29.115502 systemd[1]: Started sshd@1-10.0.0.102:22-10.0.0.1:50730.service - OpenSSH per-connection server daemon (10.0.0.1:50730). 
Dec 16 12:57:29.167681 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 50730 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:57:29.169271 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:29.174104 systemd-logind[1584]: New session 2 of user core. Dec 16 12:57:29.183994 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:57:29.196671 sshd[1749]: Connection closed by 10.0.0.1 port 50730 Dec 16 12:57:29.196987 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:29.207684 systemd[1]: sshd@1-10.0.0.102:22-10.0.0.1:50730.service: Deactivated successfully. Dec 16 12:57:29.210059 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:57:29.210896 systemd-logind[1584]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:57:29.214379 systemd[1]: Started sshd@2-10.0.0.102:22-10.0.0.1:50734.service - OpenSSH per-connection server daemon (10.0.0.1:50734). Dec 16 12:57:29.215060 systemd-logind[1584]: Removed session 2. Dec 16 12:57:29.275479 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 50734 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:57:29.276794 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:29.281515 systemd-logind[1584]: New session 3 of user core. Dec 16 12:57:29.290971 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:57:29.300946 sshd[1758]: Connection closed by 10.0.0.1 port 50734 Dec 16 12:57:29.301241 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:29.309484 systemd[1]: sshd@2-10.0.0.102:22-10.0.0.1:50734.service: Deactivated successfully. Dec 16 12:57:29.311411 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:57:29.312249 systemd-logind[1584]: Session 3 logged out. Waiting for processes to exit. 
Dec 16 12:57:29.314851 systemd[1]: Started sshd@3-10.0.0.102:22-10.0.0.1:50744.service - OpenSSH per-connection server daemon (10.0.0.1:50744). Dec 16 12:57:29.315964 systemd-logind[1584]: Removed session 3. Dec 16 12:57:29.369674 sshd[1764]: Accepted publickey for core from 10.0.0.1 port 50744 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:57:29.371458 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:29.376600 systemd-logind[1584]: New session 4 of user core. Dec 16 12:57:29.386042 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:57:29.398849 sshd[1767]: Connection closed by 10.0.0.1 port 50744 Dec 16 12:57:29.399178 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:29.407595 systemd[1]: sshd@3-10.0.0.102:22-10.0.0.1:50744.service: Deactivated successfully. Dec 16 12:57:29.409569 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:57:29.410336 systemd-logind[1584]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:57:29.413254 systemd[1]: Started sshd@4-10.0.0.102:22-10.0.0.1:50752.service - OpenSSH per-connection server daemon (10.0.0.1:50752). Dec 16 12:57:29.413797 systemd-logind[1584]: Removed session 4. Dec 16 12:57:29.474539 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 50752 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:57:29.475973 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:29.480871 systemd-logind[1584]: New session 5 of user core. Dec 16 12:57:29.500016 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:57:29.520734 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:57:29.521079 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:57:29.540694 sudo[1777]: pam_unix(sudo:session): session closed for user root Dec 16 12:57:29.542957 sshd[1776]: Connection closed by 10.0.0.1 port 50752 Dec 16 12:57:29.543302 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:29.558843 systemd[1]: sshd@4-10.0.0.102:22-10.0.0.1:50752.service: Deactivated successfully. Dec 16 12:57:29.560800 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:57:29.561670 systemd-logind[1584]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:57:29.564778 systemd[1]: Started sshd@5-10.0.0.102:22-10.0.0.1:50758.service - OpenSSH per-connection server daemon (10.0.0.1:50758). Dec 16 12:57:29.565532 systemd-logind[1584]: Removed session 5. Dec 16 12:57:29.618866 sshd[1783]: Accepted publickey for core from 10.0.0.1 port 50758 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:57:29.620595 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:57:29.625377 systemd-logind[1584]: New session 6 of user core. Dec 16 12:57:29.634962 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 12:57:29.650420 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:57:29.650735 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:57:29.715399 sudo[1789]: pam_unix(sudo:session): session closed for user root Dec 16 12:57:29.723843 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:57:29.724226 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:57:29.735181 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:57:29.776000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:57:29.777886 augenrules[1811]: No rules Dec 16 12:57:29.779868 kernel: kauditd_printk_skb: 106 callbacks suppressed Dec 16 12:57:29.779923 kernel: audit: type=1305 audit(1765889849.776:196): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 12:57:29.779695 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:57:29.780043 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 12:57:29.781248 sudo[1788]: pam_unix(sudo:session): session closed for user root Dec 16 12:57:29.776000 audit[1811]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6d49f8c0 a2=420 a3=0 items=0 ppid=1792 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:29.783006 sshd[1787]: Connection closed by 10.0.0.1 port 50758 Dec 16 12:57:29.783361 sshd-session[1783]: pam_unix(sshd:session): session closed for user core Dec 16 12:57:29.788319 kernel: audit: type=1300 audit(1765889849.776:196): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd6d49f8c0 a2=420 a3=0 items=0 ppid=1792 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:29.788368 kernel: audit: type=1327 audit(1765889849.776:196): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:57:29.776000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 12:57:29.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.795382 kernel: audit: type=1130 audit(1765889849.779:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.795410 kernel: audit: type=1131 audit(1765889849.779:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 16 12:57:29.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.799821 kernel: audit: type=1106 audit(1765889849.780:199): pid=1788 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.780000 audit[1788]: USER_END pid=1788 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.804803 kernel: audit: type=1104 audit(1765889849.780:200): pid=1788 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.780000 audit[1788]: CRED_DISP pid=1788 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:29.784000 audit[1783]: USER_END pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.814319 kernel: audit: type=1106 audit(1765889849.784:201): pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.814353 kernel: audit: type=1104 audit(1765889849.784:202): pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.784000 audit[1783]: CRED_DISP pid=1783 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.826811 systemd[1]: sshd@5-10.0.0.102:22-10.0.0.1:50758.service: Deactivated successfully. Dec 16 12:57:29.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.102:22-10.0.0.1:50758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.828754 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:57:29.829851 systemd-logind[1584]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:57:29.831588 systemd-logind[1584]: Removed session 6. 
Dec 16 12:57:29.832676 systemd[1]: Started sshd@6-10.0.0.102:22-10.0.0.1:50768.service - OpenSSH per-connection server daemon (10.0.0.1:50768). Dec 16 12:57:29.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.102:22-10.0.0.1:50768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.832897 kernel: audit: type=1131 audit(1765889849.826:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.102:22-10.0.0.1:50758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.887000 audit[1820]: USER_ACCT pid=1820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.888995 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 50768 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:57:29.888000 audit[1820]: CRED_ACQ pid=1820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.888000 audit[1820]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff75f6ce80 a2=3 a3=0 items=0 ppid=1 pid=1820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:29.888000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:57:29.890207 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 
12:57:29.894671 systemd-logind[1584]: New session 7 of user core. Dec 16 12:57:29.910978 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:57:29.912000 audit[1820]: USER_START pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.913000 audit[1823]: CRED_ACQ pid=1823 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:57:29.923000 audit[1824]: USER_ACCT pid=1824 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.924442 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:57:29.923000 audit[1824]: CRED_REFR pid=1824 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:29.924754 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:57:29.925000 audit[1824]: USER_START pid=1824 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:57:30.301057 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 12:57:30.318163 (dockerd)[1845]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:57:30.586124 dockerd[1845]: time="2025-12-16T12:57:30.585993977Z" level=info msg="Starting up" Dec 16 12:57:30.587094 dockerd[1845]: time="2025-12-16T12:57:30.586843139Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:57:30.598794 dockerd[1845]: time="2025-12-16T12:57:30.598739008Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:57:31.982604 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:57:31.984354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:57:32.002695 dockerd[1845]: time="2025-12-16T12:57:32.002644322Z" level=info msg="Loading containers: start." Dec 16 12:57:32.558719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:57:32.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:57:32.563290 (kubelet)[1878]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:57:32.587846 kernel: Initializing XFRM netlink socket Dec 16 12:57:32.610504 kubelet[1878]: E1216 12:57:32.610454 1878 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:57:32.618895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:57:32.619135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:57:32.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 12:57:32.619662 systemd[1]: kubelet.service: Consumed 241ms CPU time, 111.4M memory peak. 
Dec 16 12:57:32.655000 audit[1915]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.655000 audit[1915]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff37580560 a2=0 a3=0 items=0 ppid=1845 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.655000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:57:32.658000 audit[1917]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1917 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.658000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffffd21bc40 a2=0 a3=0 items=0 ppid=1845 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.658000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:57:32.660000 audit[1919]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.660000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf8948910 a2=0 a3=0 items=0 ppid=1845 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:57:32.663000 audit[1921]: NETFILTER_CFG 
table=filter:5 family=2 entries=1 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.663000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd78452400 a2=0 a3=0 items=0 ppid=1845 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:57:32.665000 audit[1923]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.665000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff398088e0 a2=0 a3=0 items=0 ppid=1845 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.665000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:57:32.668000 audit[1925]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.668000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffecc9d7ce0 a2=0 a3=0 items=0 ppid=1845 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:57:32.670000 audit[1927]: NETFILTER_CFG 
table=filter:8 family=2 entries=1 op=nft_register_chain pid=1927 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.670000 audit[1927]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe09a93c90 a2=0 a3=0 items=0 ppid=1845 pid=1927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:57:32.673000 audit[1929]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.673000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe7baa8710 a2=0 a3=0 items=0 ppid=1845 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.673000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:57:32.707000 audit[1932]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.707000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffd9036b00 a2=0 a3=0 items=0 ppid=1845 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.707000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 12:57:32.710000 audit[1934]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.710000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff015b1f40 a2=0 a3=0 items=0 ppid=1845 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:57:32.712000 audit[1936]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1936 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.712000 audit[1936]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe72d6f080 a2=0 a3=0 items=0 ppid=1845 pid=1936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:57:32.716000 audit[1938]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.716000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffcac9560c0 a2=0 a3=0 items=0 ppid=1845 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:57:32.719000 audit[1940]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1940 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.719000 audit[1940]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffa87644a0 a2=0 a3=0 items=0 ppid=1845 pid=1940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:57:32.765000 audit[1970]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.765000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdd9601780 a2=0 a3=0 items=0 ppid=1845 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 12:57:32.768000 audit[1972]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.768000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcc369cca0 a2=0 a3=0 items=0 ppid=1845 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 12:57:32.771000 audit[1974]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.771000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee74c5ce0 a2=0 a3=0 items=0 ppid=1845 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.771000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 12:57:32.773000 audit[1976]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.773000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfaae8a30 a2=0 a3=0 items=0 ppid=1845 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 12:57:32.776000 audit[1978]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.776000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffecfc91cf0 a2=0 a3=0 items=0 ppid=1845 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.776000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 12:57:32.779000 audit[1980]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.779000 audit[1980]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc583adc10 a2=0 a3=0 items=0 ppid=1845 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:57:32.782000 audit[1982]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.782000 audit[1982]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffc5fa8fe0 a2=0 a3=0 items=0 ppid=1845 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.782000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 12:57:32.786000 audit[1984]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.786000 audit[1984]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fffbb5af060 a2=0 a3=0 items=0 ppid=1845 pid=1984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 12:57:32.789000 audit[1986]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1986 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.789000 audit[1986]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7fff51a912a0 a2=0 a3=0 items=0 ppid=1845 pid=1986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 12:57:32.792000 audit[1988]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.792000 audit[1988]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff1c53ecb0 a2=0 a3=0 items=0 ppid=1845 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.792000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 12:57:32.795000 audit[1990]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.795000 audit[1990]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=236 a0=3 a1=7ffed7901a90 a2=0 a3=0 items=0 ppid=1845 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.795000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 12:57:32.797000 audit[1992]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.797000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffeee9afe70 a2=0 a3=0 items=0 ppid=1845 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 12:57:32.800000 audit[1994]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.800000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffda2afc2c0 a2=0 a3=0 items=0 ppid=1845 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.800000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 12:57:32.806000 audit[1999]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Dec 16 12:57:32.806000 audit[1999]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffefed89770 a2=0 a3=0 items=0 ppid=1845 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 12:57:32.809000 audit[2001]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.809000 audit[2001]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd35deace0 a2=0 a3=0 items=0 ppid=1845 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 12:57:32.811000 audit[2003]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:32.811000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff09a87260 a2=0 a3=0 items=0 ppid=1845 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:32.811000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 12:57:32.815000 audit[2005]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:32.815000 
audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff3493cb90 a2=0 a3=0 items=0 ppid=1845 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:32.815000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Dec 16 12:57:32.817000 audit[2007]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:57:32.817000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe8977f390 a2=0 a3=0 items=0 ppid=1845 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:32.817000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Dec 16 12:57:32.820000 audit[2009]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Dec 16 12:57:32.820000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd762ea3e0 a2=0 a3=0 items=0 ppid=1845 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:32.820000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Dec 16 12:57:33.213000 audit[2013]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.213000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffc9fd13a30 a2=0 a3=0 items=0 ppid=1845 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Dec 16 12:57:33.216000 audit[2016]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2016 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.216000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff6c330600 a2=0 a3=0 items=0 ppid=1845 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Dec 16 12:57:33.227000 audit[2024]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.227000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffffbb2aea0 a2=0 a3=0 items=0 ppid=1845 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054
Dec 16 12:57:33.237000 audit[2030]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.237000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdf708fd40 a2=0 a3=0 items=0 ppid=1845 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.237000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50
Dec 16 12:57:33.241000 audit[2032]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.241000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff90f3bef0 a2=0 a3=0 items=0 ppid=1845 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Dec 16 12:57:33.243000 audit[2034]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.243000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8c377b90 a2=0 a3=0 items=0 ppid=1845 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552
Dec 16 12:57:33.246000 audit[2036]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.246000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff5c46a1e0 a2=0 a3=0 items=0 ppid=1845 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Dec 16 12:57:33.249000 audit[2038]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Dec 16 12:57:33.249000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd2d5a78b0 a2=0 a3=0 items=0 ppid=1845 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:57:33.249000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Dec 16 12:57:33.250403 systemd-networkd[1515]: docker0: Link UP
Dec 16 12:57:33.258059 dockerd[1845]: time="2025-12-16T12:57:33.257987078Z" level=info msg="Loading containers: done."
Dec 16 12:57:33.273092 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4101386001-merged.mount: Deactivated successfully.
Dec 16 12:57:33.280169 dockerd[1845]: time="2025-12-16T12:57:33.280116138Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:57:33.280246 dockerd[1845]: time="2025-12-16T12:57:33.280215775Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:57:33.280325 dockerd[1845]: time="2025-12-16T12:57:33.280299752Z" level=info msg="Initializing buildkit"
Dec 16 12:57:33.315551 dockerd[1845]: time="2025-12-16T12:57:33.315504514Z" level=info msg="Completed buildkit initialization"
Dec 16 12:57:33.322741 dockerd[1845]: time="2025-12-16T12:57:33.322693275Z" level=info msg="Daemon has completed initialization"
Dec 16 12:57:33.322905 dockerd[1845]: time="2025-12-16T12:57:33.322778144Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:57:33.323050 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:57:33.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:33.983550 containerd[1605]: time="2025-12-16T12:57:33.983479355Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 16 12:57:34.961397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1863615053.mount: Deactivated successfully.
Dec 16 12:57:35.755494 containerd[1605]: time="2025-12-16T12:57:35.755417050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:35.756236 containerd[1605]: time="2025-12-16T12:57:35.756202393Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28445968"
Dec 16 12:57:35.757413 containerd[1605]: time="2025-12-16T12:57:35.757377006Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:35.759654 containerd[1605]: time="2025-12-16T12:57:35.759619371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:35.760498 containerd[1605]: time="2025-12-16T12:57:35.760446132Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.776905131s"
Dec 16 12:57:35.760498 containerd[1605]: time="2025-12-16T12:57:35.760482550Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Dec 16 12:57:35.760970 containerd[1605]: time="2025-12-16T12:57:35.760945919Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 12:57:37.037004 containerd[1605]: time="2025-12-16T12:57:37.036905709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:37.037698 containerd[1605]: time="2025-12-16T12:57:37.037640207Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626"
Dec 16 12:57:37.038833 containerd[1605]: time="2025-12-16T12:57:37.038777399Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:37.041484 containerd[1605]: time="2025-12-16T12:57:37.041447246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:37.042322 containerd[1605]: time="2025-12-16T12:57:37.042264168Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.281291528s"
Dec 16 12:57:37.042322 containerd[1605]: time="2025-12-16T12:57:37.042307930Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Dec 16 12:57:37.042932 containerd[1605]: time="2025-12-16T12:57:37.042897476Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 12:57:39.697742 containerd[1605]: time="2025-12-16T12:57:39.697654262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:39.698682 containerd[1605]: time="2025-12-16T12:57:39.698660329Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965"
Dec 16 12:57:39.700042 containerd[1605]: time="2025-12-16T12:57:39.699988530Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:39.702517 containerd[1605]: time="2025-12-16T12:57:39.702480644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:39.703565 containerd[1605]: time="2025-12-16T12:57:39.703531454Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.660600585s"
Dec 16 12:57:39.703606 containerd[1605]: time="2025-12-16T12:57:39.703565568Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Dec 16 12:57:39.704047 containerd[1605]: time="2025-12-16T12:57:39.704017676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 12:57:40.959837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2288513858.mount: Deactivated successfully.
Dec 16 12:57:41.759875 containerd[1605]: time="2025-12-16T12:57:41.759763523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:41.760619 containerd[1605]: time="2025-12-16T12:57:41.760550680Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374"
Dec 16 12:57:41.761846 containerd[1605]: time="2025-12-16T12:57:41.761725092Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:41.764752 containerd[1605]: time="2025-12-16T12:57:41.764692136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:41.765326 containerd[1605]: time="2025-12-16T12:57:41.765290378Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.06104737s"
Dec 16 12:57:41.765393 containerd[1605]: time="2025-12-16T12:57:41.765324362Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Dec 16 12:57:41.765942 containerd[1605]: time="2025-12-16T12:57:41.765883611Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 12:57:42.732640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:57:42.734389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:57:42.926966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:57:42.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:42.928182 kernel: kauditd_printk_skb: 134 callbacks suppressed
Dec 16 12:57:42.928251 kernel: audit: type=1130 audit(1765889862.926:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:42.932796 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:57:43.573214 kubelet[2164]: E1216 12:57:43.573120 2164 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:57:43.578632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:57:43.578876 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:57:43.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:57:43.579330 systemd[1]: kubelet.service: Consumed 246ms CPU time, 109.7M memory peak.
Dec 16 12:57:43.584887 kernel: audit: type=1131 audit(1765889863.578:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:57:44.002668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608100951.mount: Deactivated successfully.
Dec 16 12:57:45.464975 containerd[1605]: time="2025-12-16T12:57:45.464891050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:45.466868 containerd[1605]: time="2025-12-16T12:57:45.466844604Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20810403"
Dec 16 12:57:45.470381 containerd[1605]: time="2025-12-16T12:57:45.470345589Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:45.481966 containerd[1605]: time="2025-12-16T12:57:45.481922710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:45.483066 containerd[1605]: time="2025-12-16T12:57:45.483002405Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.71707354s"
Dec 16 12:57:45.483066 containerd[1605]: time="2025-12-16T12:57:45.483062708Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Dec 16 12:57:45.483719 containerd[1605]: time="2025-12-16T12:57:45.483612790Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:57:46.251927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019707450.mount: Deactivated successfully.
Dec 16 12:57:46.259196 containerd[1605]: time="2025-12-16T12:57:46.259136103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:57:46.260021 containerd[1605]: time="2025-12-16T12:57:46.259962212Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 16 12:57:46.261155 containerd[1605]: time="2025-12-16T12:57:46.261110426Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:57:46.263233 containerd[1605]: time="2025-12-16T12:57:46.263182161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:57:46.263884 containerd[1605]: time="2025-12-16T12:57:46.263842629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 780.188692ms"
Dec 16 12:57:46.263884 containerd[1605]: time="2025-12-16T12:57:46.263871804Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 16 12:57:46.264497 containerd[1605]: time="2025-12-16T12:57:46.264311749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 12:57:47.839714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482004527.mount: Deactivated successfully.
Dec 16 12:57:51.194225 containerd[1605]: time="2025-12-16T12:57:51.194118986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:51.230512 containerd[1605]: time="2025-12-16T12:57:51.230412439Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58915995"
Dec 16 12:57:51.295325 containerd[1605]: time="2025-12-16T12:57:51.295266586Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:51.322634 containerd[1605]: time="2025-12-16T12:57:51.322574896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:57:51.323779 containerd[1605]: time="2025-12-16T12:57:51.323729621Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.05937992s"
Dec 16 12:57:51.323851 containerd[1605]: time="2025-12-16T12:57:51.323786862Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Dec 16 12:57:53.732714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 12:57:53.734486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:57:53.962174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:57:53.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:53.967895 kernel: audit: type=1130 audit(1765889873.961:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:53.976097 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:57:54.010388 kubelet[2320]: E1216 12:57:54.010219 2320 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:57:54.015642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:57:54.015972 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:57:54.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:57:54.016518 systemd[1]: kubelet.service: Consumed 214ms CPU time, 112.4M memory peak.
Dec 16 12:57:54.021858 kernel: audit: type=1131 audit(1765889874.015:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:57:55.086087 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:57:55.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:55.086362 systemd[1]: kubelet.service: Consumed 214ms CPU time, 112.4M memory peak.
Dec 16 12:57:55.088635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:57:55.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:55.095462 kernel: audit: type=1130 audit(1765889875.085:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:55.095513 kernel: audit: type=1131 audit(1765889875.085:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:55.114305 systemd[1]: Reload requested from client PID 2336 ('systemctl') (unit session-7.scope)...
Dec 16 12:57:55.114319 systemd[1]: Reloading...
Dec 16 12:57:55.197858 zram_generator::config[2387]: No configuration found.
Dec 16 12:57:55.939076 systemd[1]: Reloading finished in 824 ms.
Dec 16 12:57:55.970000 audit: BPF prog-id=51 op=LOAD
Dec 16 12:57:55.970000 audit: BPF prog-id=48 op=UNLOAD
Dec 16 12:57:55.974566 kernel: audit: type=1334 audit(1765889875.970:262): prog-id=51 op=LOAD
Dec 16 12:57:55.974626 kernel: audit: type=1334 audit(1765889875.970:263): prog-id=48 op=UNLOAD
Dec 16 12:57:55.974676 kernel: audit: type=1334 audit(1765889875.970:264): prog-id=52 op=LOAD
Dec 16 12:57:55.970000 audit: BPF prog-id=52 op=LOAD
Dec 16 12:57:55.975985 kernel: audit: type=1334 audit(1765889875.970:265): prog-id=53 op=LOAD
Dec 16 12:57:55.976018 kernel: audit: type=1334 audit(1765889875.970:266): prog-id=49 op=UNLOAD
Dec 16 12:57:55.976042 kernel: audit: type=1334 audit(1765889875.970:267): prog-id=50 op=UNLOAD
Dec 16 12:57:55.970000 audit: BPF prog-id=53 op=LOAD
Dec 16 12:57:55.970000 audit: BPF prog-id=49 op=UNLOAD
Dec 16 12:57:55.970000 audit: BPF prog-id=50 op=UNLOAD
Dec 16 12:57:55.972000 audit: BPF prog-id=54 op=LOAD
Dec 16 12:57:55.972000 audit: BPF prog-id=43 op=UNLOAD
Dec 16 12:57:55.973000 audit: BPF prog-id=55 op=LOAD
Dec 16 12:57:55.973000 audit: BPF prog-id=34 op=UNLOAD
Dec 16 12:57:55.973000 audit: BPF prog-id=56 op=LOAD
Dec 16 12:57:55.973000 audit: BPF prog-id=57 op=LOAD
Dec 16 12:57:55.973000 audit: BPF prog-id=35 op=UNLOAD
Dec 16 12:57:55.973000 audit: BPF prog-id=36 op=UNLOAD
Dec 16 12:57:55.974000 audit: BPF prog-id=58 op=LOAD
Dec 16 12:57:55.974000 audit: BPF prog-id=59 op=LOAD
Dec 16 12:57:55.974000 audit: BPF prog-id=45 op=UNLOAD
Dec 16 12:57:55.974000 audit: BPF prog-id=46 op=UNLOAD
Dec 16 12:57:55.975000 audit: BPF prog-id=60 op=LOAD
Dec 16 12:57:55.975000 audit: BPF prog-id=47 op=UNLOAD
Dec 16 12:57:55.976000 audit: BPF prog-id=61 op=LOAD
Dec 16 12:57:55.976000 audit: BPF prog-id=40 op=UNLOAD
Dec 16 12:57:55.976000 audit: BPF prog-id=62 op=LOAD
Dec 16 12:57:55.976000 audit: BPF prog-id=63 op=LOAD
Dec 16 12:57:55.976000 audit: BPF prog-id=41 op=UNLOAD
Dec 16 12:57:55.976000 audit: BPF prog-id=42 op=UNLOAD
Dec 16 12:57:55.977000 audit: BPF prog-id=64 op=LOAD
Dec 16 12:57:55.977000 audit: BPF prog-id=37 op=UNLOAD
Dec 16 12:57:55.977000 audit: BPF prog-id=65 op=LOAD
Dec 16 12:57:55.977000 audit: BPF prog-id=66 op=LOAD
Dec 16 12:57:55.977000 audit: BPF prog-id=38 op=UNLOAD
Dec 16 12:57:55.977000 audit: BPF prog-id=39 op=UNLOAD
Dec 16 12:57:55.979000 audit: BPF prog-id=67 op=LOAD
Dec 16 12:57:55.979000 audit: BPF prog-id=31 op=UNLOAD
Dec 16 12:57:55.979000 audit: BPF prog-id=68 op=LOAD
Dec 16 12:57:55.979000 audit: BPF prog-id=69 op=LOAD
Dec 16 12:57:55.979000 audit: BPF prog-id=32 op=UNLOAD
Dec 16 12:57:55.979000 audit: BPF prog-id=33 op=UNLOAD
Dec 16 12:57:55.980000 audit: BPF prog-id=70 op=LOAD
Dec 16 12:57:55.980000 audit: BPF prog-id=44 op=UNLOAD
Dec 16 12:57:56.018502 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:57:56.018610 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:57:56.018968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:57:56.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Dec 16 12:57:56.019043 systemd[1]: kubelet.service: Consumed 161ms CPU time, 98.3M memory peak.
Dec 16 12:57:56.020633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:57:56.227270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:57:56.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:57:56.247378 (kubelet)[2429]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:57:56.284105 kubelet[2429]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:57:56.284105 kubelet[2429]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:57:56.284105 kubelet[2429]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:57:56.284480 kubelet[2429]: I1216 12:57:56.284142 2429 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:57:56.884487 kubelet[2429]: I1216 12:57:56.884422 2429 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 12:57:56.884487 kubelet[2429]: I1216 12:57:56.884464 2429 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:57:56.884751 kubelet[2429]: I1216 12:57:56.884727 2429 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:57:56.937144 kubelet[2429]: E1216 12:57:56.937101 2429 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:57:56.937491 kubelet[2429]: I1216 12:57:56.937369 2429 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:57:56.943449 kubelet[2429]: I1216 12:57:56.943425 2429 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:57:56.950061 kubelet[2429]: I1216 12:57:56.950025 2429 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:57:56.950389 kubelet[2429]: I1216 12:57:56.950351 2429 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:57:56.950569 kubelet[2429]: I1216 12:57:56.950385 2429 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:57:56.950681 kubelet[2429]: I1216 12:57:56.950570 2429 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:57:56.950681 kubelet[2429]: I1216 12:57:56.950579 2429 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 12:57:56.951357 kubelet[2429]: I1216 12:57:56.951333 2429 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:57:56.953577 kubelet[2429]: I1216 12:57:56.953550 2429 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 12:57:56.953577 kubelet[2429]: I1216 12:57:56.953577 2429 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:57:56.955303 kubelet[2429]: I1216 12:57:56.955279 2429 kubelet.go:386] "Adding apiserver pod source"
Dec 16 12:57:56.955303 kubelet[2429]: I1216 12:57:56.955306 2429 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:57:56.957118 kubelet[2429]: E1216 12:57:56.957076 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:57:56.957289 kubelet[2429]: E1216 12:57:56.957260 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:57:56.958797 kubelet[2429]: I1216 12:57:56.958766 2429 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Dec 16 12:57:56.959240 kubelet[2429]: I1216 12:57:56.959222 2429 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:57:56.960306 kubelet[2429]: W1216 12:57:56.960281 2429 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:57:56.962842 kubelet[2429]: I1216 12:57:56.962799 2429 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:57:56.962908 kubelet[2429]: I1216 12:57:56.962867 2429 server.go:1289] "Started kubelet"
Dec 16 12:57:56.964992 kubelet[2429]: I1216 12:57:56.964925 2429 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:57:56.966493 kubelet[2429]: I1216 12:57:56.966175 2429 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:57:56.966493 kubelet[2429]: I1216 12:57:56.966260 2429 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:57:56.966493 kubelet[2429]: I1216 12:57:56.966308 2429 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:57:56.967507 kubelet[2429]: I1216 12:57:56.967484 2429 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 12:57:56.968501 kubelet[2429]: I1216 12:57:56.968482 2429 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:57:56.968606 kubelet[2429]: I1216 12:57:56.968574 2429 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:57:56.969945 kubelet[2429]: E1216 12:57:56.969914 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node
\"localhost\" not found" Dec 16 12:57:56.970043 kubelet[2429]: E1216 12:57:56.970014 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="200ms" Dec 16 12:57:56.970372 kubelet[2429]: E1216 12:57:56.970334 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:57:56.971288 kubelet[2429]: I1216 12:57:56.971174 2429 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:57:56.971288 kubelet[2429]: I1216 12:57:56.971199 2429 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:57:56.972407 kubelet[2429]: I1216 12:57:56.972364 2429 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:57:56.972788 kubelet[2429]: I1216 12:57:56.972477 2429 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:57:56.972938 kubelet[2429]: I1216 12:57:56.972877 2429 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:57:56.974609 kubelet[2429]: E1216 12:57:56.973399 2429 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.102:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1881b37cacd3ba1c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 12:57:56.962818588 +0000 UTC m=+0.710245372,LastTimestamp:2025-12-16 12:57:56.962818588 +0000 UTC m=+0.710245372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 12:57:56.974000 audit[2447]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:56.974000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec7688e30 a2=0 a3=0 items=0 ppid=2429 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.974000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:57:56.976000 audit[2448]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:56.976000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd814e2c00 a2=0 a3=0 items=0 ppid=2429 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:57:56.979000 audit[2450]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 16 12:57:56.979000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd903725a0 a2=0 a3=0 items=0 ppid=2429 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:57:56.982000 audit[2453]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:56.982000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe98a8fdd0 a2=0 a3=0 items=0 ppid=2429 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:57:56.988075 kubelet[2429]: I1216 12:57:56.988043 2429 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:57:56.988075 kubelet[2429]: I1216 12:57:56.988061 2429 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:57:56.988075 kubelet[2429]: I1216 12:57:56.988080 2429 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:57:56.989000 audit[2458]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:56.989000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe38f4f6b0 a2=0 a3=0 items=0 ppid=2429 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 12:57:56.991786 kubelet[2429]: I1216 12:57:56.991743 2429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:57:56.991000 audit[2459]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:56.991000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdcf7158e0 a2=0 a3=0 items=0 ppid=2429 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 12:57:56.993333 kubelet[2429]: I1216 12:57:56.993310 2429 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 16 12:57:56.993371 kubelet[2429]: I1216 12:57:56.993339 2429 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:57:56.993371 kubelet[2429]: I1216 12:57:56.993363 2429 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:57:56.993432 kubelet[2429]: I1216 12:57:56.993373 2429 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:57:56.993432 kubelet[2429]: E1216 12:57:56.993420 2429 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:57:56.991000 audit[2460]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:56.991000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd292774d0 a2=0 a3=0 items=0 ppid=2429 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.991000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:57:56.994475 kubelet[2429]: E1216 12:57:56.994446 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:57:56.992000 audit[2462]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:56.992000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5233ad90 a2=0 a3=0 items=0 ppid=2429 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.992000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 12:57:56.995024 kubelet[2429]: I1216 12:57:56.994913 2429 policy_none.go:49] "None policy: Start" Dec 16 12:57:56.995024 kubelet[2429]: I1216 12:57:56.994929 2429 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:57:56.995024 kubelet[2429]: I1216 12:57:56.994942 2429 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:57:56.994000 audit[2464]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:56.994000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd35ea0650 a2=0 a3=0 items=0 ppid=2429 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.994000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:57:56.994000 audit[2465]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:56.994000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcebaea2e0 a2=0 a3=0 items=0 ppid=2429 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 12:57:56.995000 audit[2466]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:57:56.995000 
audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd18c421f0 a2=0 a3=0 items=0 ppid=2429 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.995000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:57:56.996000 audit[2467]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:57:56.996000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee3b57040 a2=0 a3=0 items=0 ppid=2429 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:56.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 12:57:57.002596 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:57:57.015040 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:57:57.018586 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 16 12:57:57.037057 kubelet[2429]: E1216 12:57:57.037010 2429 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:57:57.037342 kubelet[2429]: I1216 12:57:57.037314 2429 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:57:57.037391 kubelet[2429]: I1216 12:57:57.037328 2429 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:57:57.037601 kubelet[2429]: I1216 12:57:57.037520 2429 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:57:57.038771 kubelet[2429]: E1216 12:57:57.038755 2429 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:57:57.038891 kubelet[2429]: E1216 12:57:57.038834 2429 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 12:57:57.106155 systemd[1]: Created slice kubepods-burstable-pod7bc3d8f172bc95168592c6e4c86a9b90.slice - libcontainer container kubepods-burstable-pod7bc3d8f172bc95168592c6e4c86a9b90.slice. Dec 16 12:57:57.117237 kubelet[2429]: E1216 12:57:57.117192 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:57:57.121974 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Dec 16 12:57:57.139291 kubelet[2429]: I1216 12:57:57.139162 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:57:57.139551 kubelet[2429]: E1216 12:57:57.139526 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Dec 16 12:57:57.142556 kubelet[2429]: E1216 12:57:57.142514 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:57:57.145649 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 16 12:57:57.147486 kubelet[2429]: E1216 12:57:57.147467 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:57:57.171105 kubelet[2429]: E1216 12:57:57.171065 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="400ms" Dec 16 12:57:57.272712 kubelet[2429]: I1216 12:57:57.272642 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:57:57.272712 kubelet[2429]: I1216 12:57:57.272687 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:57:57.272712 kubelet[2429]: I1216 12:57:57.272705 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:57:57.272712 kubelet[2429]: I1216 12:57:57.272719 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:57:57.273006 kubelet[2429]: I1216 12:57:57.272734 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bc3d8f172bc95168592c6e4c86a9b90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc3d8f172bc95168592c6e4c86a9b90\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:57:57.273006 kubelet[2429]: I1216 12:57:57.272791 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:57:57.273006 kubelet[2429]: I1216 12:57:57.272813 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:57:57.273006 kubelet[2429]: I1216 12:57:57.272916 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bc3d8f172bc95168592c6e4c86a9b90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc3d8f172bc95168592c6e4c86a9b90\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:57:57.273006 kubelet[2429]: I1216 12:57:57.272976 2429 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bc3d8f172bc95168592c6e4c86a9b90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7bc3d8f172bc95168592c6e4c86a9b90\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:57:57.341070 kubelet[2429]: I1216 12:57:57.341039 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:57:57.341534 kubelet[2429]: E1216 12:57:57.341380 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Dec 16 12:57:57.418662 kubelet[2429]: E1216 12:57:57.418498 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:57.419319 containerd[1605]: time="2025-12-16T12:57:57.419268235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7bc3d8f172bc95168592c6e4c86a9b90,Namespace:kube-system,Attempt:0,}" Dec 16 12:57:57.443757 kubelet[2429]: E1216 12:57:57.443697 2429 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:57.444364 containerd[1605]: time="2025-12-16T12:57:57.444308300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 16 12:57:57.448600 kubelet[2429]: E1216 12:57:57.448568 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:57.449062 containerd[1605]: time="2025-12-16T12:57:57.449022246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 16 12:57:57.571645 kubelet[2429]: E1216 12:57:57.571576 2429 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.102:6443: connect: connection refused" interval="800ms" Dec 16 12:57:57.743390 kubelet[2429]: I1216 12:57:57.743367 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:57:57.743796 kubelet[2429]: E1216 12:57:57.743741 2429 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.102:6443/api/v1/nodes\": dial tcp 10.0.0.102:6443: connect: connection refused" node="localhost" Dec 16 12:57:57.892621 kubelet[2429]: E1216 12:57:57.892540 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:57:58.009576 
containerd[1605]: time="2025-12-16T12:57:58.009277501Z" level=info msg="connecting to shim 2b5b2bf1f5bbf5ec557d6d4d9771d054e5e7a0a2ed78e55d37211ca99948c053" address="unix:///run/containerd/s/e73027036320173c8c6bfa40141bd681590f67e4fbca89f4d203c5df101cc3a4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:58.012446 containerd[1605]: time="2025-12-16T12:57:58.011974647Z" level=info msg="connecting to shim 857dc5fe01ae35aefc3053af28005f382296180520ff08a88492740748e05580" address="unix:///run/containerd/s/72306d78735cdd864194ede544afb195d10f7d38c0ec08f7b5841c28d848951b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:58.013080 containerd[1605]: time="2025-12-16T12:57:58.013036473Z" level=info msg="connecting to shim c5fdf3b2f51aa8b14bdb29c300dfdf891e212cf5a8d253dfca84e4305829bdde" address="unix:///run/containerd/s/304fb20cc66c06d917eae1e53f1d6f036e287cebe893dfb27e6af1e58d979b5b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:57:58.042996 systemd[1]: Started cri-containerd-857dc5fe01ae35aefc3053af28005f382296180520ff08a88492740748e05580.scope - libcontainer container 857dc5fe01ae35aefc3053af28005f382296180520ff08a88492740748e05580. Dec 16 12:57:58.049036 systemd[1]: Started cri-containerd-2b5b2bf1f5bbf5ec557d6d4d9771d054e5e7a0a2ed78e55d37211ca99948c053.scope - libcontainer container 2b5b2bf1f5bbf5ec557d6d4d9771d054e5e7a0a2ed78e55d37211ca99948c053. Dec 16 12:57:58.051394 systemd[1]: Started cri-containerd-c5fdf3b2f51aa8b14bdb29c300dfdf891e212cf5a8d253dfca84e4305829bdde.scope - libcontainer container c5fdf3b2f51aa8b14bdb29c300dfdf891e212cf5a8d253dfca84e4305829bdde. 
Dec 16 12:57:58.057000 audit: BPF prog-id=71 op=LOAD Dec 16 12:57:58.058000 audit: BPF prog-id=72 op=LOAD Dec 16 12:57:58.058000 audit[2524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2493 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835376463356665303161653335616566633330353361663238303035 Dec 16 12:57:58.058000 audit: BPF prog-id=72 op=UNLOAD Dec 16 12:57:58.058000 audit[2524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835376463356665303161653335616566633330353361663238303035 Dec 16 12:57:58.058000 audit: BPF prog-id=73 op=LOAD Dec 16 12:57:58.058000 audit[2524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2493 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.058000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835376463356665303161653335616566633330353361663238303035 Dec 16 12:57:58.059000 audit: BPF prog-id=74 op=LOAD Dec 16 12:57:58.059000 audit[2524]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2493 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835376463356665303161653335616566633330353361663238303035 Dec 16 12:57:58.059000 audit: BPF prog-id=74 op=UNLOAD Dec 16 12:57:58.059000 audit[2524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835376463356665303161653335616566633330353361663238303035 Dec 16 12:57:58.059000 audit: BPF prog-id=73 op=UNLOAD Dec 16 12:57:58.059000 audit[2524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:57:58.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835376463356665303161653335616566633330353361663238303035 Dec 16 12:57:58.059000 audit: BPF prog-id=75 op=LOAD Dec 16 12:57:58.059000 audit[2524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2493 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.059000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835376463356665303161653335616566633330353361663238303035 Dec 16 12:57:58.064000 audit: BPF prog-id=76 op=LOAD Dec 16 12:57:58.066000 audit: BPF prog-id=77 op=LOAD Dec 16 12:57:58.066000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2495 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.066000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335666466336232663531616138623134626462323963333030646664 Dec 16 12:57:58.067000 audit: BPF prog-id=77 op=UNLOAD Dec 16 12:57:58.067000 audit[2528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335666466336232663531616138623134626462323963333030646664 Dec 16 12:57:58.067000 audit: BPF prog-id=78 op=LOAD Dec 16 12:57:58.067000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2495 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335666466336232663531616138623134626462323963333030646664 Dec 16 12:57:58.067000 audit: BPF prog-id=79 op=LOAD Dec 16 12:57:58.067000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2495 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335666466336232663531616138623134626462323963333030646664 Dec 16 12:57:58.067000 audit: BPF prog-id=79 op=UNLOAD Dec 16 12:57:58.067000 audit[2528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335666466336232663531616138623134626462323963333030646664 Dec 16 12:57:58.067000 audit: BPF prog-id=78 op=UNLOAD Dec 16 12:57:58.067000 audit[2528]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335666466336232663531616138623134626462323963333030646664 Dec 16 12:57:58.067000 audit: BPF prog-id=80 op=LOAD Dec 16 12:57:58.067000 audit[2528]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2495 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.067000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335666466336232663531616138623134626462323963333030646664 Dec 16 12:57:58.068000 audit: BPF prog-id=81 op=LOAD Dec 16 12:57:58.069000 audit: BPF prog-id=82 op=LOAD Dec 16 12:57:58.069000 audit[2534]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 
a1=c000178238 a2=98 a3=0 items=0 ppid=2490 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262356232626631663562626635656335353764366434643937373164 Dec 16 12:57:58.069000 audit: BPF prog-id=82 op=UNLOAD Dec 16 12:57:58.069000 audit[2534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262356232626631663562626635656335353764366434643937373164 Dec 16 12:57:58.069000 audit: BPF prog-id=83 op=LOAD Dec 16 12:57:58.069000 audit[2534]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2490 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262356232626631663562626635656335353764366434643937373164 Dec 16 12:57:58.069000 audit: BPF prog-id=84 op=LOAD Dec 16 12:57:58.069000 audit[2534]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2490 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262356232626631663562626635656335353764366434643937373164 Dec 16 12:57:58.069000 audit: BPF prog-id=84 op=UNLOAD Dec 16 12:57:58.069000 audit[2534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262356232626631663562626635656335353764366434643937373164 Dec 16 12:57:58.069000 audit: BPF prog-id=83 op=UNLOAD Dec 16 12:57:58.069000 audit[2534]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.069000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262356232626631663562626635656335353764366434643937373164 Dec 16 12:57:58.070000 audit: BPF prog-id=85 op=LOAD Dec 16 
12:57:58.070000 audit[2534]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2490 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.070000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262356232626631663562626635656335353764366434643937373164 Dec 16 12:57:58.110738 containerd[1605]: time="2025-12-16T12:57:58.110631099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7bc3d8f172bc95168592c6e4c86a9b90,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b5b2bf1f5bbf5ec557d6d4d9771d054e5e7a0a2ed78e55d37211ca99948c053\"" Dec 16 12:57:58.112565 kubelet[2429]: E1216 12:57:58.112532 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:58.113899 containerd[1605]: time="2025-12-16T12:57:58.113856083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"857dc5fe01ae35aefc3053af28005f382296180520ff08a88492740748e05580\"" Dec 16 12:57:58.114333 kubelet[2429]: E1216 12:57:58.114310 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:58.115406 containerd[1605]: time="2025-12-16T12:57:58.115377446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"c5fdf3b2f51aa8b14bdb29c300dfdf891e212cf5a8d253dfca84e4305829bdde\"" Dec 16 12:57:58.115790 kubelet[2429]: E1216 12:57:58.115751 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:58.117125 containerd[1605]: time="2025-12-16T12:57:58.117092528Z" level=info msg="CreateContainer within sandbox \"2b5b2bf1f5bbf5ec557d6d4d9771d054e5e7a0a2ed78e55d37211ca99948c053\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:57:58.119933 containerd[1605]: time="2025-12-16T12:57:58.119650419Z" level=info msg="CreateContainer within sandbox \"857dc5fe01ae35aefc3053af28005f382296180520ff08a88492740748e05580\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:57:58.122078 containerd[1605]: time="2025-12-16T12:57:58.122056089Z" level=info msg="CreateContainer within sandbox \"c5fdf3b2f51aa8b14bdb29c300dfdf891e212cf5a8d253dfca84e4305829bdde\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:57:58.127615 containerd[1605]: time="2025-12-16T12:57:58.127564740Z" level=info msg="Container 6a7ceeffc787d2b4c83814330e3252f762ba7a9b8d1f234dd1cc957b0be48970: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:58.134907 containerd[1605]: time="2025-12-16T12:57:58.134769589Z" level=info msg="CreateContainer within sandbox \"2b5b2bf1f5bbf5ec557d6d4d9771d054e5e7a0a2ed78e55d37211ca99948c053\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a7ceeffc787d2b4c83814330e3252f762ba7a9b8d1f234dd1cc957b0be48970\"" Dec 16 12:57:58.134907 containerd[1605]: time="2025-12-16T12:57:58.134785769Z" level=info msg="Container 5208ca9d8c6c2d80299fca046123581e41c9fca9afc45f6149704cd570bc9795: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:58.135489 containerd[1605]: time="2025-12-16T12:57:58.135459335Z" level=info msg="StartContainer for 
\"6a7ceeffc787d2b4c83814330e3252f762ba7a9b8d1f234dd1cc957b0be48970\"" Dec 16 12:57:58.136872 containerd[1605]: time="2025-12-16T12:57:58.136847403Z" level=info msg="connecting to shim 6a7ceeffc787d2b4c83814330e3252f762ba7a9b8d1f234dd1cc957b0be48970" address="unix:///run/containerd/s/e73027036320173c8c6bfa40141bd681590f67e4fbca89f4d203c5df101cc3a4" protocol=ttrpc version=3 Dec 16 12:57:58.140697 containerd[1605]: time="2025-12-16T12:57:58.140657273Z" level=info msg="Container d9ef1b31cc04e30d34850d8df4c17f54d09d764cf0b6b85c1d6cc555eecffb17: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:57:58.146814 containerd[1605]: time="2025-12-16T12:57:58.146760429Z" level=info msg="CreateContainer within sandbox \"857dc5fe01ae35aefc3053af28005f382296180520ff08a88492740748e05580\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5208ca9d8c6c2d80299fca046123581e41c9fca9afc45f6149704cd570bc9795\"" Dec 16 12:57:58.147262 containerd[1605]: time="2025-12-16T12:57:58.147232209Z" level=info msg="StartContainer for \"5208ca9d8c6c2d80299fca046123581e41c9fca9afc45f6149704cd570bc9795\"" Dec 16 12:57:58.150400 containerd[1605]: time="2025-12-16T12:57:58.150365688Z" level=info msg="connecting to shim 5208ca9d8c6c2d80299fca046123581e41c9fca9afc45f6149704cd570bc9795" address="unix:///run/containerd/s/72306d78735cdd864194ede544afb195d10f7d38c0ec08f7b5841c28d848951b" protocol=ttrpc version=3 Dec 16 12:57:58.153232 containerd[1605]: time="2025-12-16T12:57:58.153187302Z" level=info msg="CreateContainer within sandbox \"c5fdf3b2f51aa8b14bdb29c300dfdf891e212cf5a8d253dfca84e4305829bdde\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d9ef1b31cc04e30d34850d8df4c17f54d09d764cf0b6b85c1d6cc555eecffb17\"" Dec 16 12:57:58.154061 containerd[1605]: time="2025-12-16T12:57:58.153965267Z" level=info msg="StartContainer for \"d9ef1b31cc04e30d34850d8df4c17f54d09d764cf0b6b85c1d6cc555eecffb17\"" Dec 16 12:57:58.155280 containerd[1605]: 
time="2025-12-16T12:57:58.155253775Z" level=info msg="connecting to shim d9ef1b31cc04e30d34850d8df4c17f54d09d764cf0b6b85c1d6cc555eecffb17" address="unix:///run/containerd/s/304fb20cc66c06d917eae1e53f1d6f036e287cebe893dfb27e6af1e58d979b5b" protocol=ttrpc version=3 Dec 16 12:57:58.159290 systemd[1]: Started cri-containerd-6a7ceeffc787d2b4c83814330e3252f762ba7a9b8d1f234dd1cc957b0be48970.scope - libcontainer container 6a7ceeffc787d2b4c83814330e3252f762ba7a9b8d1f234dd1cc957b0be48970. Dec 16 12:57:58.177030 systemd[1]: Started cri-containerd-5208ca9d8c6c2d80299fca046123581e41c9fca9afc45f6149704cd570bc9795.scope - libcontainer container 5208ca9d8c6c2d80299fca046123581e41c9fca9afc45f6149704cd570bc9795. Dec 16 12:57:58.181546 systemd[1]: Started cri-containerd-d9ef1b31cc04e30d34850d8df4c17f54d09d764cf0b6b85c1d6cc555eecffb17.scope - libcontainer container d9ef1b31cc04e30d34850d8df4c17f54d09d764cf0b6b85c1d6cc555eecffb17. Dec 16 12:57:58.182000 audit: BPF prog-id=86 op=LOAD Dec 16 12:57:58.183000 audit: BPF prog-id=87 op=LOAD Dec 16 12:57:58.183000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2490 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376365656666633738376432623463383338313433333065333235 Dec 16 12:57:58.183000 audit: BPF prog-id=87 op=UNLOAD Dec 16 12:57:58.183000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376365656666633738376432623463383338313433333065333235 Dec 16 12:57:58.183000 audit: BPF prog-id=88 op=LOAD Dec 16 12:57:58.183000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2490 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376365656666633738376432623463383338313433333065333235 Dec 16 12:57:58.183000 audit: BPF prog-id=89 op=LOAD Dec 16 12:57:58.183000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2490 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376365656666633738376432623463383338313433333065333235 Dec 16 12:57:58.183000 audit: BPF prog-id=89 op=UNLOAD Dec 16 12:57:58.183000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376365656666633738376432623463383338313433333065333235 Dec 16 12:57:58.183000 audit: BPF prog-id=88 op=UNLOAD Dec 16 12:57:58.183000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2490 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376365656666633738376432623463383338313433333065333235 Dec 16 12:57:58.183000 audit: BPF prog-id=90 op=LOAD Dec 16 12:57:58.183000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2490 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.183000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661376365656666633738376432623463383338313433333065333235 Dec 16 12:57:58.193000 audit: BPF prog-id=91 op=LOAD Dec 16 12:57:58.194000 audit: BPF prog-id=92 op=LOAD Dec 16 12:57:58.194000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2493 
pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303863613964386336633264383032393966636130343631323335 Dec 16 12:57:58.194000 audit: BPF prog-id=92 op=UNLOAD Dec 16 12:57:58.194000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303863613964386336633264383032393966636130343631323335 Dec 16 12:57:58.194000 audit: BPF prog-id=93 op=LOAD Dec 16 12:57:58.194000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2493 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303863613964386336633264383032393966636130343631323335 Dec 16 12:57:58.194000 audit: BPF prog-id=94 op=LOAD Dec 16 12:57:58.194000 audit[2618]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 
a1=c000178218 a2=98 a3=0 items=0 ppid=2493 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303863613964386336633264383032393966636130343631323335 Dec 16 12:57:58.194000 audit: BPF prog-id=94 op=UNLOAD Dec 16 12:57:58.194000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303863613964386336633264383032393966636130343631323335 Dec 16 12:57:58.194000 audit: BPF prog-id=93 op=UNLOAD Dec 16 12:57:58.194000 audit[2618]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2493 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303863613964386336633264383032393966636130343631323335 Dec 16 12:57:58.194000 audit: BPF prog-id=95 op=LOAD Dec 16 12:57:58.194000 audit[2618]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2493 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532303863613964386336633264383032393966636130343631323335 Dec 16 12:57:58.195000 audit: BPF prog-id=96 op=LOAD Dec 16 12:57:58.196000 audit: BPF prog-id=97 op=LOAD Dec 16 12:57:58.196000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2495 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656631623331636330346533306433343835306438646634633137 Dec 16 12:57:58.196000 audit: BPF prog-id=97 op=UNLOAD Dec 16 12:57:58.196000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656631623331636330346533306433343835306438646634633137 Dec 16 
12:57:58.196000 audit: BPF prog-id=98 op=LOAD Dec 16 12:57:58.196000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2495 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656631623331636330346533306433343835306438646634633137 Dec 16 12:57:58.196000 audit: BPF prog-id=99 op=LOAD Dec 16 12:57:58.196000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2495 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656631623331636330346533306433343835306438646634633137 Dec 16 12:57:58.196000 audit: BPF prog-id=99 op=UNLOAD Dec 16 12:57:58.196000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.196000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656631623331636330346533306433343835306438646634633137 Dec 16 12:57:58.196000 audit: BPF prog-id=98 op=UNLOAD Dec 16 12:57:58.196000 audit[2624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2495 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656631623331636330346533306433343835306438646634633137 Dec 16 12:57:58.197000 audit: BPF prog-id=100 op=LOAD Dec 16 12:57:58.197000 audit[2624]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2495 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:57:58.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439656631623331636330346533306433343835306438646634633137 Dec 16 12:57:58.217731 kubelet[2429]: E1216 12:57:58.216799 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:57:58.235608 containerd[1605]: time="2025-12-16T12:57:58.235555303Z" level=info msg="StartContainer for \"6a7ceeffc787d2b4c83814330e3252f762ba7a9b8d1f234dd1cc957b0be48970\" returns successfully" Dec 16 12:57:58.237528 containerd[1605]: time="2025-12-16T12:57:58.237501447Z" level=info msg="StartContainer for \"5208ca9d8c6c2d80299fca046123581e41c9fca9afc45f6149704cd570bc9795\" returns successfully" Dec 16 12:57:58.238456 kubelet[2429]: E1216 12:57:58.238414 2429 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.102:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:57:58.486706 containerd[1605]: time="2025-12-16T12:57:58.486649241Z" level=info msg="StartContainer for \"d9ef1b31cc04e30d34850d8df4c17f54d09d764cf0b6b85c1d6cc555eecffb17\" returns successfully" Dec 16 12:57:58.545690 kubelet[2429]: I1216 12:57:58.545635 2429 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:57:59.015982 kubelet[2429]: E1216 12:57:59.015936 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:57:59.016114 kubelet[2429]: E1216 12:57:59.016083 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:59.017567 kubelet[2429]: E1216 12:57:59.017536 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:57:59.017803 kubelet[2429]: E1216 12:57:59.017774 2429 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:59.021657 kubelet[2429]: E1216 12:57:59.021619 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:57:59.021815 kubelet[2429]: E1216 12:57:59.021788 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:57:59.515119 kubelet[2429]: E1216 12:57:59.515045 2429 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 16 12:57:59.616174 kubelet[2429]: I1216 12:57:59.616117 2429 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:57:59.616174 kubelet[2429]: E1216 12:57:59.616165 2429 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 12:57:59.632208 kubelet[2429]: E1216 12:57:59.632146 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:57:59.732661 kubelet[2429]: E1216 12:57:59.732578 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:57:59.833504 kubelet[2429]: E1216 12:57:59.833351 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:57:59.934101 kubelet[2429]: E1216 12:57:59.934033 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.022303 kubelet[2429]: E1216 12:58:00.022272 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Dec 16 12:58:00.022404 kubelet[2429]: E1216 12:58:00.022383 2429 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 12:58:00.022404 kubelet[2429]: E1216 12:58:00.022391 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:00.022484 kubelet[2429]: E1216 12:58:00.022463 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:00.034402 kubelet[2429]: E1216 12:58:00.034370 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.135365 kubelet[2429]: E1216 12:58:00.135237 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.235704 kubelet[2429]: E1216 12:58:00.235651 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.336422 kubelet[2429]: E1216 12:58:00.336377 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.437463 kubelet[2429]: E1216 12:58:00.437350 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.538394 kubelet[2429]: E1216 12:58:00.538345 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.638844 kubelet[2429]: E1216 12:58:00.638797 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.739561 kubelet[2429]: E1216 
12:58:00.739529 2429 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:00.872025 kubelet[2429]: I1216 12:58:00.871976 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:00.879653 kubelet[2429]: I1216 12:58:00.879618 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:00.883485 kubelet[2429]: I1216 12:58:00.883401 2429 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:00.958186 kubelet[2429]: I1216 12:58:00.958129 2429 apiserver.go:52] "Watching apiserver" Dec 16 12:58:00.960342 kubelet[2429]: E1216 12:58:00.960298 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:00.971695 kubelet[2429]: I1216 12:58:00.971659 2429 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:58:01.023941 kubelet[2429]: E1216 12:58:01.023802 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:01.024671 kubelet[2429]: E1216 12:58:01.024116 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:02.025531 kubelet[2429]: E1216 12:58:02.025484 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:02.026008 kubelet[2429]: E1216 12:58:02.025493 2429 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:02.359637 systemd[1]: Reload requested from client PID 2715 ('systemctl') (unit session-7.scope)... Dec 16 12:58:02.359661 systemd[1]: Reloading... Dec 16 12:58:02.444861 zram_generator::config[2764]: No configuration found. Dec 16 12:58:03.368847 systemd[1]: Reloading finished in 1008 ms. Dec 16 12:58:03.404632 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:58:03.417537 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:58:03.418010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:03.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:03.418101 systemd[1]: kubelet.service: Consumed 1.032s CPU time, 131.2M memory peak. Dec 16 12:58:03.419456 kernel: kauditd_printk_skb: 204 callbacks suppressed Dec 16 12:58:03.419522 kernel: audit: type=1131 audit(1765889883.417:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:03.420494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:58:03.425179 kernel: audit: type=1334 audit(1765889883.419:365): prog-id=101 op=LOAD Dec 16 12:58:03.419000 audit: BPF prog-id=101 op=LOAD Dec 16 12:58:03.419000 audit: BPF prog-id=64 op=UNLOAD Dec 16 12:58:03.426630 kernel: audit: type=1334 audit(1765889883.419:366): prog-id=64 op=UNLOAD Dec 16 12:58:03.426692 kernel: audit: type=1334 audit(1765889883.419:367): prog-id=102 op=LOAD Dec 16 12:58:03.426721 kernel: audit: type=1334 audit(1765889883.419:368): prog-id=103 op=LOAD Dec 16 12:58:03.426744 kernel: audit: type=1334 audit(1765889883.419:369): prog-id=65 op=UNLOAD Dec 16 12:58:03.426770 kernel: audit: type=1334 audit(1765889883.419:370): prog-id=66 op=UNLOAD Dec 16 12:58:03.426797 kernel: audit: type=1334 audit(1765889883.419:371): prog-id=104 op=LOAD Dec 16 12:58:03.426855 kernel: audit: type=1334 audit(1765889883.419:372): prog-id=105 op=LOAD Dec 16 12:58:03.426891 kernel: audit: type=1334 audit(1765889883.419:373): prog-id=58 op=UNLOAD Dec 16 12:58:03.419000 audit: BPF prog-id=102 op=LOAD Dec 16 12:58:03.419000 audit: BPF prog-id=103 op=LOAD Dec 16 12:58:03.419000 audit: BPF prog-id=65 op=UNLOAD Dec 16 12:58:03.419000 audit: BPF prog-id=66 op=UNLOAD Dec 16 12:58:03.419000 audit: BPF prog-id=104 op=LOAD Dec 16 12:58:03.419000 audit: BPF prog-id=105 op=LOAD Dec 16 12:58:03.419000 audit: BPF prog-id=58 op=UNLOAD Dec 16 12:58:03.419000 audit: BPF prog-id=59 op=UNLOAD Dec 16 12:58:03.423000 audit: BPF prog-id=106 op=LOAD Dec 16 12:58:03.423000 audit: BPF prog-id=55 op=UNLOAD Dec 16 12:58:03.423000 audit: BPF prog-id=107 op=LOAD Dec 16 12:58:03.423000 audit: BPF prog-id=108 op=LOAD Dec 16 12:58:03.423000 audit: BPF prog-id=56 op=UNLOAD Dec 16 12:58:03.423000 audit: BPF prog-id=57 op=UNLOAD Dec 16 12:58:03.425000 audit: BPF prog-id=109 op=LOAD Dec 16 12:58:03.425000 audit: BPF prog-id=54 op=UNLOAD Dec 16 12:58:03.426000 audit: BPF prog-id=110 op=LOAD Dec 16 12:58:03.426000 audit: BPF prog-id=70 op=UNLOAD Dec 16 12:58:03.428000 audit: BPF prog-id=111 
op=LOAD Dec 16 12:58:03.428000 audit: BPF prog-id=61 op=UNLOAD Dec 16 12:58:03.428000 audit: BPF prog-id=112 op=LOAD Dec 16 12:58:03.428000 audit: BPF prog-id=113 op=LOAD Dec 16 12:58:03.428000 audit: BPF prog-id=62 op=UNLOAD Dec 16 12:58:03.428000 audit: BPF prog-id=63 op=UNLOAD Dec 16 12:58:03.429000 audit: BPF prog-id=114 op=LOAD Dec 16 12:58:03.429000 audit: BPF prog-id=60 op=UNLOAD Dec 16 12:58:03.429000 audit: BPF prog-id=115 op=LOAD Dec 16 12:58:03.431000 audit: BPF prog-id=67 op=UNLOAD Dec 16 12:58:03.431000 audit: BPF prog-id=116 op=LOAD Dec 16 12:58:03.431000 audit: BPF prog-id=117 op=LOAD Dec 16 12:58:03.431000 audit: BPF prog-id=68 op=UNLOAD Dec 16 12:58:03.431000 audit: BPF prog-id=69 op=UNLOAD Dec 16 12:58:03.433000 audit: BPF prog-id=118 op=LOAD Dec 16 12:58:03.433000 audit: BPF prog-id=51 op=UNLOAD Dec 16 12:58:03.433000 audit: BPF prog-id=119 op=LOAD Dec 16 12:58:03.433000 audit: BPF prog-id=120 op=LOAD Dec 16 12:58:03.433000 audit: BPF prog-id=52 op=UNLOAD Dec 16 12:58:03.433000 audit: BPF prog-id=53 op=UNLOAD Dec 16 12:58:03.690986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:58:03.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:03.695338 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:58:03.742024 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:58:03.742024 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 16 12:58:03.742024 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:58:03.742463 kubelet[2806]: I1216 12:58:03.742136 2806 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:58:03.749698 kubelet[2806]: I1216 12:58:03.749653 2806 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:58:03.749698 kubelet[2806]: I1216 12:58:03.749678 2806 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:58:03.749932 kubelet[2806]: I1216 12:58:03.749906 2806 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:58:03.751114 kubelet[2806]: I1216 12:58:03.751088 2806 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:58:03.753537 kubelet[2806]: I1216 12:58:03.753487 2806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:58:03.757460 kubelet[2806]: I1216 12:58:03.757434 2806 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:58:03.762086 kubelet[2806]: I1216 12:58:03.762063 2806 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:58:03.762348 kubelet[2806]: I1216 12:58:03.762322 2806 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:58:03.762520 kubelet[2806]: I1216 12:58:03.762346 2806 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:58:03.762636 kubelet[2806]: I1216 12:58:03.762523 2806 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:58:03.762636 
kubelet[2806]: I1216 12:58:03.762536 2806 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:58:03.762636 kubelet[2806]: I1216 12:58:03.762583 2806 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:58:03.762796 kubelet[2806]: I1216 12:58:03.762779 2806 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:58:03.762796 kubelet[2806]: I1216 12:58:03.762794 2806 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:58:03.762895 kubelet[2806]: I1216 12:58:03.762843 2806 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:58:03.762895 kubelet[2806]: I1216 12:58:03.762856 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:58:03.764211 kubelet[2806]: I1216 12:58:03.764185 2806 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 12:58:03.764932 kubelet[2806]: I1216 12:58:03.764899 2806 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:58:03.769872 kubelet[2806]: I1216 12:58:03.769413 2806 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:58:03.769872 kubelet[2806]: I1216 12:58:03.769456 2806 server.go:1289] "Started kubelet" Dec 16 12:58:03.770359 kubelet[2806]: I1216 12:58:03.770266 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:58:03.770836 kubelet[2806]: I1216 12:58:03.770798 2806 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:58:03.770893 kubelet[2806]: I1216 12:58:03.770872 2806 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:58:03.771483 kubelet[2806]: I1216 12:58:03.771450 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:58:03.772253 
kubelet[2806]: I1216 12:58:03.772196 2806 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:58:03.775660 kubelet[2806]: I1216 12:58:03.775632 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:58:03.778371 kubelet[2806]: I1216 12:58:03.778352 2806 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:58:03.778812 kubelet[2806]: E1216 12:58:03.778745 2806 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 12:58:03.779393 kubelet[2806]: I1216 12:58:03.779377 2806 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:58:03.779593 kubelet[2806]: I1216 12:58:03.779580 2806 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:58:03.785803 kubelet[2806]: E1216 12:58:03.785764 2806 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:58:03.786157 kubelet[2806]: I1216 12:58:03.786136 2806 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:58:03.786157 kubelet[2806]: I1216 12:58:03.786155 2806 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:58:03.786272 kubelet[2806]: I1216 12:58:03.786243 2806 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:58:03.795170 kubelet[2806]: I1216 12:58:03.793335 2806 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:58:03.795170 kubelet[2806]: I1216 12:58:03.795159 2806 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:58:03.795170 kubelet[2806]: I1216 12:58:03.795177 2806 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:58:03.795406 kubelet[2806]: I1216 12:58:03.795201 2806 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:58:03.795406 kubelet[2806]: I1216 12:58:03.795210 2806 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:58:03.795406 kubelet[2806]: E1216 12:58:03.795255 2806 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846595 2806 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846618 2806 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846640 2806 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846770 2806 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846780 2806 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846797 2806 policy_none.go:49] "None policy: Start" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846808 2806 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846835 2806 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:58:03.847551 kubelet[2806]: I1216 12:58:03.846923 2806 state_mem.go:75] "Updated machine memory state" Dec 16 12:58:03.851447 kubelet[2806]: E1216 12:58:03.851426 2806 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:58:03.851637 kubelet[2806]: I1216 
12:58:03.851621 2806 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:58:03.851667 kubelet[2806]: I1216 12:58:03.851635 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:58:03.852029 kubelet[2806]: I1216 12:58:03.852013 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:58:03.853396 kubelet[2806]: E1216 12:58:03.853360 2806 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:58:03.897250 kubelet[2806]: I1216 12:58:03.897198 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:03.897571 kubelet[2806]: I1216 12:58:03.897211 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:03.897804 kubelet[2806]: I1216 12:58:03.897783 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:03.904772 kubelet[2806]: E1216 12:58:03.904661 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:03.905248 kubelet[2806]: E1216 12:58:03.905224 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:03.905335 kubelet[2806]: E1216 12:58:03.905249 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:03.957314 kubelet[2806]: I1216 12:58:03.957198 2806 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 12:58:03.981260 kubelet[2806]: I1216 12:58:03.981197 2806 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bc3d8f172bc95168592c6e4c86a9b90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc3d8f172bc95168592c6e4c86a9b90\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:03.981260 kubelet[2806]: I1216 12:58:03.981253 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bc3d8f172bc95168592c6e4c86a9b90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7bc3d8f172bc95168592c6e4c86a9b90\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:03.981419 kubelet[2806]: I1216 12:58:03.981288 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:03.981419 kubelet[2806]: I1216 12:58:03.981311 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:03.981419 kubelet[2806]: I1216 12:58:03.981337 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:03.981419 kubelet[2806]: I1216 12:58:03.981362 2806 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:03.981419 kubelet[2806]: I1216 12:58:03.981385 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:03.981589 kubelet[2806]: I1216 12:58:03.981408 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bc3d8f172bc95168592c6e4c86a9b90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7bc3d8f172bc95168592c6e4c86a9b90\") " pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:03.981589 kubelet[2806]: I1216 12:58:03.981432 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:04.179302 update_engine[1585]: I20251216 12:58:04.179203 1585 update_attempter.cc:509] Updating boot flags... 
Dec 16 12:58:04.205222 kubelet[2806]: E1216 12:58:04.205154 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:04.206352 kubelet[2806]: E1216 12:58:04.206243 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:04.206352 kubelet[2806]: E1216 12:58:04.206275 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:04.276635 kubelet[2806]: I1216 12:58:04.276578 2806 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 12:58:04.276813 kubelet[2806]: I1216 12:58:04.276668 2806 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 12:58:04.764273 kubelet[2806]: I1216 12:58:04.764213 2806 apiserver.go:52] "Watching apiserver" Dec 16 12:58:04.779942 kubelet[2806]: I1216 12:58:04.779892 2806 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:58:04.821336 kubelet[2806]: I1216 12:58:04.821303 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 12:58:04.821467 kubelet[2806]: I1216 12:58:04.821374 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:04.821593 kubelet[2806]: I1216 12:58:04.821558 2806 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:04.920064 kubelet[2806]: E1216 12:58:04.918748 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 
12:58:04.920064 kubelet[2806]: E1216 12:58:04.918968 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:04.920064 kubelet[2806]: E1216 12:58:04.919279 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 12:58:04.920064 kubelet[2806]: E1216 12:58:04.919381 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:04.920064 kubelet[2806]: E1216 12:58:04.919449 2806 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 12:58:04.920064 kubelet[2806]: E1216 12:58:04.919547 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:04.930019 kubelet[2806]: I1216 12:58:04.929861 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.929768064 podStartE2EDuration="4.929768064s" podCreationTimestamp="2025-12-16 12:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:04.929620374 +0000 UTC m=+1.222758858" watchObservedRunningTime="2025-12-16 12:58:04.929768064 +0000 UTC m=+1.222906558" Dec 16 12:58:04.930154 kubelet[2806]: I1216 12:58:04.930095 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.930060279 podStartE2EDuration="4.930060279s" podCreationTimestamp="2025-12-16 12:58:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:04.9185728 +0000 UTC m=+1.211711294" watchObservedRunningTime="2025-12-16 12:58:04.930060279 +0000 UTC m=+1.223198773" Dec 16 12:58:04.941583 kubelet[2806]: I1216 12:58:04.941221 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.941202322 podStartE2EDuration="4.941202322s" podCreationTimestamp="2025-12-16 12:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:04.940984158 +0000 UTC m=+1.234122662" watchObservedRunningTime="2025-12-16 12:58:04.941202322 +0000 UTC m=+1.234340816" Dec 16 12:58:05.823026 kubelet[2806]: E1216 12:58:05.822974 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:05.823465 kubelet[2806]: E1216 12:58:05.823159 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:05.823465 kubelet[2806]: E1216 12:58:05.823170 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:06.825154 kubelet[2806]: E1216 12:58:06.824855 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:06.825154 kubelet[2806]: E1216 12:58:06.824855 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 16 12:58:07.177438 kubelet[2806]: I1216 12:58:07.177332 2806 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:58:07.177673 containerd[1605]: time="2025-12-16T12:58:07.177637231Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:58:07.178064 kubelet[2806]: I1216 12:58:07.177897 2806 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:58:08.058962 systemd[1]: Created slice kubepods-besteffort-pod75714a60_0bb2_4927_a72c_1c8e562acb59.slice - libcontainer container kubepods-besteffort-pod75714a60_0bb2_4927_a72c_1c8e562acb59.slice. Dec 16 12:58:08.102710 kubelet[2806]: I1216 12:58:08.102673 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75714a60-0bb2-4927-a72c-1c8e562acb59-lib-modules\") pod \"kube-proxy-bmjrk\" (UID: \"75714a60-0bb2-4927-a72c-1c8e562acb59\") " pod="kube-system/kube-proxy-bmjrk" Dec 16 12:58:08.102710 kubelet[2806]: I1216 12:58:08.102712 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75714a60-0bb2-4927-a72c-1c8e562acb59-kube-proxy\") pod \"kube-proxy-bmjrk\" (UID: \"75714a60-0bb2-4927-a72c-1c8e562acb59\") " pod="kube-system/kube-proxy-bmjrk" Dec 16 12:58:08.103173 kubelet[2806]: I1216 12:58:08.102737 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75714a60-0bb2-4927-a72c-1c8e562acb59-xtables-lock\") pod \"kube-proxy-bmjrk\" (UID: \"75714a60-0bb2-4927-a72c-1c8e562acb59\") " pod="kube-system/kube-proxy-bmjrk" Dec 16 12:58:08.103173 kubelet[2806]: I1216 12:58:08.102813 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-g8fkm\" (UniqueName: \"kubernetes.io/projected/75714a60-0bb2-4927-a72c-1c8e562acb59-kube-api-access-g8fkm\") pod \"kube-proxy-bmjrk\" (UID: \"75714a60-0bb2-4927-a72c-1c8e562acb59\") " pod="kube-system/kube-proxy-bmjrk" Dec 16 12:58:08.653209 systemd[1]: Created slice kubepods-besteffort-pod086930ad_c25a_49e6_86c1_3c0efafa2124.slice - libcontainer container kubepods-besteffort-pod086930ad_c25a_49e6_86c1_3c0efafa2124.slice. Dec 16 12:58:08.671301 kubelet[2806]: E1216 12:58:08.671249 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:08.672189 containerd[1605]: time="2025-12-16T12:58:08.672111637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bmjrk,Uid:75714a60-0bb2-4927-a72c-1c8e562acb59,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:08.706089 kubelet[2806]: I1216 12:58:08.706016 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjjv\" (UniqueName: \"kubernetes.io/projected/086930ad-c25a-49e6-86c1-3c0efafa2124-kube-api-access-2mjjv\") pod \"tigera-operator-7dcd859c48-747pb\" (UID: \"086930ad-c25a-49e6-86c1-3c0efafa2124\") " pod="tigera-operator/tigera-operator-7dcd859c48-747pb" Dec 16 12:58:08.706089 kubelet[2806]: I1216 12:58:08.706080 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/086930ad-c25a-49e6-86c1-3c0efafa2124-var-lib-calico\") pod \"tigera-operator-7dcd859c48-747pb\" (UID: \"086930ad-c25a-49e6-86c1-3c0efafa2124\") " pod="tigera-operator/tigera-operator-7dcd859c48-747pb" Dec 16 12:58:08.717443 containerd[1605]: time="2025-12-16T12:58:08.717390761Z" level=info msg="connecting to shim 4fb330f69ff4a105a5d23563f213dcd57b15d0918e3a6769a65cfc86392089f2" 
address="unix:///run/containerd/s/e1f5141a66d27046322c22d397fcaa0e89cbba99b218e38d93dc084a43eb4ee2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:08.773982 systemd[1]: Started cri-containerd-4fb330f69ff4a105a5d23563f213dcd57b15d0918e3a6769a65cfc86392089f2.scope - libcontainer container 4fb330f69ff4a105a5d23563f213dcd57b15d0918e3a6769a65cfc86392089f2. Dec 16 12:58:08.783000 audit: BPF prog-id=121 op=LOAD Dec 16 12:58:08.785092 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 12:58:08.785155 kernel: audit: type=1334 audit(1765889888.783:406): prog-id=121 op=LOAD Dec 16 12:58:08.783000 audit: BPF prog-id=122 op=LOAD Dec 16 12:58:08.787731 kernel: audit: type=1334 audit(1765889888.783:407): prog-id=122 op=LOAD Dec 16 12:58:08.787768 kernel: audit: type=1300 audit(1765889888.783:407): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.783000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.799057 kernel: audit: type=1327 audit(1765889888.783:407): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.783000 audit: BPF prog-id=122 op=UNLOAD Dec 16 12:58:08.783000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.808357 kernel: audit: type=1334 audit(1765889888.783:408): prog-id=122 op=UNLOAD Dec 16 12:58:08.808431 kernel: audit: type=1300 audit(1765889888.783:408): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.814524 kernel: audit: type=1327 audit(1765889888.783:408): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.816343 kernel: audit: type=1334 audit(1765889888.783:409): prog-id=123 op=LOAD Dec 16 12:58:08.783000 audit: BPF prog-id=123 op=LOAD Dec 16 12:58:08.783000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 
ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.822638 kernel: audit: type=1300 audit(1765889888.783:409): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.822715 kernel: audit: type=1327 audit(1765889888.783:409): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.783000 audit: BPF prog-id=124 op=LOAD Dec 16 12:58:08.783000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.783000 audit: BPF prog-id=124 op=UNLOAD Dec 16 12:58:08.783000 audit[2898]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.783000 audit: BPF prog-id=123 op=UNLOAD Dec 16 12:58:08.783000 audit[2898]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.783000 audit: BPF prog-id=125 op=LOAD Dec 16 12:58:08.783000 audit[2898]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2887 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.783000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466623333306636396666346131303561356432333536336632313364 Dec 16 12:58:08.830237 containerd[1605]: time="2025-12-16T12:58:08.830196222Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bmjrk,Uid:75714a60-0bb2-4927-a72c-1c8e562acb59,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fb330f69ff4a105a5d23563f213dcd57b15d0918e3a6769a65cfc86392089f2\"" Dec 16 12:58:08.830966 kubelet[2806]: E1216 12:58:08.830935 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:08.836520 containerd[1605]: time="2025-12-16T12:58:08.836383913Z" level=info msg="CreateContainer within sandbox \"4fb330f69ff4a105a5d23563f213dcd57b15d0918e3a6769a65cfc86392089f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:58:08.849537 containerd[1605]: time="2025-12-16T12:58:08.849486290Z" level=info msg="Container 2d2672114b02d73db9b98d113755ed08e917ccfd631f06924cf585a1729b34c0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:08.857733 containerd[1605]: time="2025-12-16T12:58:08.857684225Z" level=info msg="CreateContainer within sandbox \"4fb330f69ff4a105a5d23563f213dcd57b15d0918e3a6769a65cfc86392089f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d2672114b02d73db9b98d113755ed08e917ccfd631f06924cf585a1729b34c0\"" Dec 16 12:58:08.858315 containerd[1605]: time="2025-12-16T12:58:08.858290251Z" level=info msg="StartContainer for \"2d2672114b02d73db9b98d113755ed08e917ccfd631f06924cf585a1729b34c0\"" Dec 16 12:58:08.859540 containerd[1605]: time="2025-12-16T12:58:08.859507345Z" level=info msg="connecting to shim 2d2672114b02d73db9b98d113755ed08e917ccfd631f06924cf585a1729b34c0" address="unix:///run/containerd/s/e1f5141a66d27046322c22d397fcaa0e89cbba99b218e38d93dc084a43eb4ee2" protocol=ttrpc version=3 Dec 16 12:58:08.878987 systemd[1]: Started cri-containerd-2d2672114b02d73db9b98d113755ed08e917ccfd631f06924cf585a1729b34c0.scope - libcontainer container 2d2672114b02d73db9b98d113755ed08e917ccfd631f06924cf585a1729b34c0. 
Dec 16 12:58:08.941000 audit: BPF prog-id=126 op=LOAD Dec 16 12:58:08.941000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=2887 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264323637323131346230326437336462396239386431313337353565 Dec 16 12:58:08.941000 audit: BPF prog-id=127 op=LOAD Dec 16 12:58:08.941000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=2887 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264323637323131346230326437336462396239386431313337353565 Dec 16 12:58:08.941000 audit: BPF prog-id=127 op=UNLOAD Dec 16 12:58:08.941000 audit[2923]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.941000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264323637323131346230326437336462396239386431313337353565 Dec 16 12:58:08.941000 audit: BPF prog-id=126 op=UNLOAD Dec 16 12:58:08.941000 audit[2923]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2887 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264323637323131346230326437336462396239386431313337353565 Dec 16 12:58:08.941000 audit: BPF prog-id=128 op=LOAD Dec 16 12:58:08.941000 audit[2923]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=2887 pid=2923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:08.941000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3264323637323131346230326437336462396239386431313337353565 Dec 16 12:58:08.958331 containerd[1605]: time="2025-12-16T12:58:08.958296660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-747pb,Uid:086930ad-c25a-49e6-86c1-3c0efafa2124,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:58:09.125000 audit[2990]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2990 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.125000 audit[2990]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe651dfac0 a2=0 a3=7ffe651dfaac items=0 ppid=2937 pid=2990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:58:09.126000 audit[2991]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.126000 audit[2991]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe28bbeea0 a2=0 a3=7ffe28bbee8c items=0 ppid=2937 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 12:58:09.127000 audit[2994]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.127000 audit[2994]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe01a900f0 a2=0 a3=7ffe01a900dc items=0 ppid=2937 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:58:09.130000 audit[2995]: NETFILTER_CFG table=nat:57 family=10 entries=1 
op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.130000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd184a0a0 a2=0 a3=7fffd184a08c items=0 ppid=2937 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.130000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 12:58:09.131000 audit[2996]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.131000 audit[2996]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd7738fc70 a2=0 a3=7ffd7738fc5c items=0 ppid=2937 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:58:09.132000 audit[2998]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.132000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf3681110 a2=0 a3=7ffdf36810fc items=0 ppid=2937 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 12:58:09.195470 containerd[1605]: 
time="2025-12-16T12:58:09.195374425Z" level=info msg="StartContainer for \"2d2672114b02d73db9b98d113755ed08e917ccfd631f06924cf585a1729b34c0\" returns successfully" Dec 16 12:58:09.226000 audit[2999]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.226000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffff8fa83d0 a2=0 a3=7ffff8fa83bc items=0 ppid=2937 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:58:09.230000 audit[3001]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.230000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff4890ca60 a2=0 a3=7fff4890ca4c items=0 ppid=2937 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 12:58:09.236000 audit[3004]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.236000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdf20efbc0 a2=0 a3=7ffdf20efbac items=0 ppid=2937 
pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 12:58:09.238000 audit[3005]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.238000 audit[3005]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1fd4f180 a2=0 a3=7ffc1fd4f16c items=0 ppid=2937 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:58:09.241000 audit[3007]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.241000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff64a6260 a2=0 a3=7ffff64a624c items=0 ppid=2937 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.241000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 
12:58:09.242000 audit[3008]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.242000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe005cf2b0 a2=0 a3=7ffe005cf29c items=0 ppid=2937 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:58:09.246000 audit[3010]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.246000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff5bf9df30 a2=0 a3=7fff5bf9df1c items=0 ppid=2937 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.246000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:58:09.250000 audit[3013]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.250000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffc83bd3b0 a2=0 a3=7fffc83bd39c items=0 ppid=2937 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:58:09.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 12:58:09.252000 audit[3014]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.252000 audit[3014]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc51cc53b0 a2=0 a3=7ffc51cc539c items=0 ppid=2937 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.252000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:58:09.255000 audit[3016]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.255000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf1e53380 a2=0 a3=7ffcf1e5336c items=0 ppid=2937 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:58:09.257000 audit[3017]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.257000 audit[3017]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffd7f3a4740 a2=0 a3=7ffd7f3a472c items=0 ppid=2937 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:58:09.260000 audit[3019]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.260000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe5e5cac20 a2=0 a3=7ffe5e5cac0c items=0 ppid=2937 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:58:09.264000 audit[3022]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.264000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd28f21200 a2=0 a3=7ffd28f211ec items=0 ppid=2937 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.264000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:58:09.269000 audit[3025]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.269000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc0f2d090 a2=0 a3=7ffcc0f2d07c items=0 ppid=2937 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.269000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:58:09.270000 audit[3026]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.270000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc09e928c0 a2=0 a3=7ffc09e928ac items=0 ppid=2937 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:58:09.273000 audit[3028]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.273000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffd7db0a320 a2=0 a3=7ffd7db0a30c items=0 ppid=2937 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:58:09.278000 audit[3031]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.278000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea7872be0 a2=0 a3=7ffea7872bcc items=0 ppid=2937 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.278000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:58:09.279000 audit[3032]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.279000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd97107ca0 a2=0 a3=7ffd97107c8c items=0 ppid=2937 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 
12:58:09.282000 audit[3034]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 12:58:09.282000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc7e49f930 a2=0 a3=7ffc7e49f91c items=0 ppid=2937 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.282000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:58:09.304000 audit[3040]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:09.304000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffef02c9150 a2=0 a3=7ffef02c913c items=0 ppid=2937 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:09.315000 audit[3040]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:09.315000 audit[3040]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffef02c9150 a2=0 a3=7ffef02c913c items=0 ppid=2937 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:09.317000 audit[3045]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.317000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff1508ebe0 a2=0 a3=7fff1508ebcc items=0 ppid=2937 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 12:58:09.320000 audit[3047]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.320000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd44f7fac0 a2=0 a3=7ffd44f7faac items=0 ppid=2937 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 12:58:09.325000 audit[3050]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.325000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7fffcbd15b80 a2=0 a3=7fffcbd15b6c items=0 ppid=2937 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 12:58:09.327000 audit[3051]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.327000 audit[3051]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe63f35a80 a2=0 a3=7ffe63f35a6c items=0 ppid=2937 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.327000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 12:58:09.330000 audit[3053]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.330000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff31475c10 a2=0 a3=7fff31475bfc items=0 ppid=2937 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.330000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 12:58:09.331000 audit[3054]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.331000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc069b820 a2=0 a3=7ffdc069b80c items=0 ppid=2937 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 12:58:09.335000 audit[3056]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.335000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca4dd9fa0 a2=0 a3=7ffca4dd9f8c items=0 ppid=2937 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 12:58:09.340000 audit[3059]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.340000 audit[3059]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7fff08fcfce0 a2=0 a3=7fff08fcfccc items=0 ppid=2937 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 12:58:09.341000 audit[3060]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.341000 audit[3060]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb1f7af30 a2=0 a3=7ffdb1f7af1c items=0 ppid=2937 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 12:58:09.344000 audit[3062]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.344000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe27457eb0 a2=0 a3=7ffe27457e9c items=0 ppid=2937 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.344000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 12:58:09.346000 audit[3063]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.346000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb24fb490 a2=0 a3=7ffeb24fb47c items=0 ppid=2937 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 12:58:09.349000 audit[3065]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.349000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf0fa2830 a2=0 a3=7ffdf0fa281c items=0 ppid=2937 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.349000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 12:58:09.354000 audit[3068]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.354000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffc266350b0 a2=0 a3=7ffc2663509c items=0 ppid=2937 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.354000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 12:58:09.359000 audit[3071]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.359000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf8e3c110 a2=0 a3=7ffcf8e3c0fc items=0 ppid=2937 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.359000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 12:58:09.361000 audit[3072]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.361000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd368c120 a2=0 a3=7ffcd368c10c items=0 ppid=2937 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.361000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 12:58:09.364000 audit[3074]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.364000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdddd48660 a2=0 a3=7ffdddd4864c items=0 ppid=2937 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.364000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:58:09.370000 audit[3077]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.370000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffb8cb740 a2=0 a3=7ffffb8cb72c items=0 ppid=2937 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 12:58:09.372000 audit[3078]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.372000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdcff89b50 a2=0 a3=7ffdcff89b3c items=0 ppid=2937 
pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 12:58:09.375000 audit[3080]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.375000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffce1c8ff40 a2=0 a3=7ffce1c8ff2c items=0 ppid=2937 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 12:58:09.377000 audit[3081]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.377000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff85d800c0 a2=0 a3=7fff85d800ac items=0 ppid=2937 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 12:58:09.380000 audit[3083]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 12:58:09.380000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffc1394800 a2=0 a3=7fffc13947ec items=0 ppid=2937 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:58:09.386000 audit[3086]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 12:58:09.386000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe59b33ae0 a2=0 a3=7ffe59b33acc items=0 ppid=2937 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 12:58:09.390000 audit[3088]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:58:09.390000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffffea98220 a2=0 a3=7ffffea9820c items=0 ppid=2937 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.390000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:09.391000 audit[3088]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 12:58:09.391000 audit[3088]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffffea98220 a2=0 a3=7ffffea9820c items=0 ppid=2937 pid=3088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.391000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:09.465692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2646529590.mount: Deactivated successfully. Dec 16 12:58:09.784704 containerd[1605]: time="2025-12-16T12:58:09.784644700Z" level=info msg="connecting to shim 50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865" address="unix:///run/containerd/s/b2aa4deccd1ed00f6e4bc97f38a71bc847a96ae0e185a9912f477e3b23fa42c1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:09.818045 systemd[1]: Started cri-containerd-50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865.scope - libcontainer container 50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865. 
Dec 16 12:58:09.831000 audit: BPF prog-id=129 op=LOAD Dec 16 12:58:09.832000 audit: BPF prog-id=130 op=LOAD Dec 16 12:58:09.832000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3097 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653733353939343862613665636232383965643030313039623432 Dec 16 12:58:09.832000 audit: BPF prog-id=130 op=UNLOAD Dec 16 12:58:09.832000 audit[3109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.832000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653733353939343862613665636232383965643030313039623432 Dec 16 12:58:09.834170 kubelet[2806]: E1216 12:58:09.833588 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:09.833000 audit: BPF prog-id=131 op=LOAD Dec 16 12:58:09.833000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3097 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653733353939343862613665636232383965643030313039623432 Dec 16 12:58:09.833000 audit: BPF prog-id=132 op=LOAD Dec 16 12:58:09.833000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3097 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653733353939343862613665636232383965643030313039623432 Dec 16 12:58:09.833000 audit: BPF prog-id=132 op=UNLOAD Dec 16 12:58:09.833000 audit[3109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653733353939343862613665636232383965643030313039623432 Dec 16 12:58:09.833000 audit: BPF prog-id=131 op=UNLOAD Dec 16 12:58:09.833000 audit[3109]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653733353939343862613665636232383965643030313039623432 Dec 16 12:58:09.833000 audit: BPF prog-id=133 op=LOAD Dec 16 12:58:09.833000 audit[3109]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3097 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:09.833000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530653733353939343862613665636232383965643030313039623432 Dec 16 12:58:09.872226 containerd[1605]: time="2025-12-16T12:58:09.872178259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-747pb,Uid:086930ad-c25a-49e6-86c1-3c0efafa2124,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865\"" Dec 16 12:58:09.873933 containerd[1605]: time="2025-12-16T12:58:09.873711770Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:58:12.908997 kubelet[2806]: E1216 12:58:12.908933 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:12.922317 kubelet[2806]: I1216 12:58:12.922257 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bmjrk" podStartSLOduration=4.922239609 
podStartE2EDuration="4.922239609s" podCreationTimestamp="2025-12-16 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:09.842746128 +0000 UTC m=+6.135884612" watchObservedRunningTime="2025-12-16 12:58:12.922239609 +0000 UTC m=+9.215378093" Dec 16 12:58:13.414612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149103218.mount: Deactivated successfully. Dec 16 12:58:13.833190 containerd[1605]: time="2025-12-16T12:58:13.833131190Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:13.834241 containerd[1605]: time="2025-12-16T12:58:13.834187324Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Dec 16 12:58:13.835386 containerd[1605]: time="2025-12-16T12:58:13.835349208Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:13.840095 containerd[1605]: time="2025-12-16T12:58:13.840054339Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:13.840562 containerd[1605]: time="2025-12-16T12:58:13.840520089Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.966753055s" Dec 16 12:58:13.840562 containerd[1605]: time="2025-12-16T12:58:13.840558432Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 12:58:13.841524 kubelet[2806]: E1216 12:58:13.841421 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:13.845503 containerd[1605]: time="2025-12-16T12:58:13.845429657Z" level=info msg="CreateContainer within sandbox \"50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:58:13.857137 containerd[1605]: time="2025-12-16T12:58:13.857099067Z" level=info msg="Container e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:13.863646 containerd[1605]: time="2025-12-16T12:58:13.863613615Z" level=info msg="CreateContainer within sandbox \"50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a\"" Dec 16 12:58:13.863897 containerd[1605]: time="2025-12-16T12:58:13.863868766Z" level=info msg="StartContainer for \"e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a\"" Dec 16 12:58:13.864548 containerd[1605]: time="2025-12-16T12:58:13.864520668Z" level=info msg="connecting to shim e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a" address="unix:///run/containerd/s/b2aa4deccd1ed00f6e4bc97f38a71bc847a96ae0e185a9912f477e3b23fa42c1" protocol=ttrpc version=3 Dec 16 12:58:13.886006 systemd[1]: Started cri-containerd-e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a.scope - libcontainer container e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a. 
Dec 16 12:58:13.899000 audit: BPF prog-id=134 op=LOAD Dec 16 12:58:13.901481 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 12:58:13.901556 kernel: audit: type=1334 audit(1765889893.899:478): prog-id=134 op=LOAD Dec 16 12:58:13.900000 audit: BPF prog-id=135 op=LOAD Dec 16 12:58:13.903939 kernel: audit: type=1334 audit(1765889893.900:479): prog-id=135 op=LOAD Dec 16 12:58:13.900000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.909370 kernel: audit: type=1300 audit(1765889893.900:479): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.914456 kernel: audit: type=1327 audit(1765889893.900:479): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.900000 audit: BPF prog-id=135 op=UNLOAD Dec 16 12:58:13.916147 kernel: audit: type=1334 audit(1765889893.900:480): prog-id=135 op=UNLOAD Dec 16 12:58:13.900000 audit[3142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3142 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.921480 kernel: audit: type=1300 audit(1765889893.900:480): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.926562 kernel: audit: type=1327 audit(1765889893.900:480): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.900000 audit: BPF prog-id=136 op=LOAD Dec 16 12:58:13.900000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.934544 kernel: audit: type=1334 audit(1765889893.900:481): prog-id=136 op=LOAD Dec 16 12:58:13.934666 kernel: audit: type=1300 audit(1765889893.900:481): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:13.934699 kernel: audit: type=1327 audit(1765889893.900:481): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.900000 audit: BPF prog-id=137 op=LOAD Dec 16 12:58:13.900000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.900000 audit: BPF prog-id=137 op=UNLOAD Dec 16 12:58:13.900000 audit[3142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 
Dec 16 12:58:13.900000 audit: BPF prog-id=136 op=UNLOAD Dec 16 12:58:13.900000 audit[3142]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.900000 audit: BPF prog-id=138 op=LOAD Dec 16 12:58:13.900000 audit[3142]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3097 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:13.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536383432613932343932663432333764353462363466383265646661 Dec 16 12:58:13.941616 containerd[1605]: time="2025-12-16T12:58:13.941575472Z" level=info msg="StartContainer for \"e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a\" returns successfully" Dec 16 12:58:14.876635 kubelet[2806]: I1216 12:58:14.876535 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-747pb" podStartSLOduration=2.908702558 podStartE2EDuration="6.876506839s" podCreationTimestamp="2025-12-16 12:58:08 +0000 UTC" firstStartedPulling="2025-12-16 12:58:09.873425037 +0000 UTC m=+6.166563521" lastFinishedPulling="2025-12-16 12:58:13.841229318 
+0000 UTC m=+10.134367802" observedRunningTime="2025-12-16 12:58:14.875584159 +0000 UTC m=+11.168722643" watchObservedRunningTime="2025-12-16 12:58:14.876506839 +0000 UTC m=+11.169645353" Dec 16 12:58:15.916710 kubelet[2806]: E1216 12:58:15.916461 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:16.017615 systemd[1]: cri-containerd-e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a.scope: Deactivated successfully. Dec 16 12:58:16.020806 containerd[1605]: time="2025-12-16T12:58:16.020756444Z" level=info msg="received container exit event container_id:\"e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a\" id:\"e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a\" pid:3156 exit_status:1 exited_at:{seconds:1765889896 nanos:20214772}" Dec 16 12:58:16.020000 audit: BPF prog-id=134 op=UNLOAD Dec 16 12:58:16.020000 audit: BPF prog-id=138 op=UNLOAD Dec 16 12:58:16.044610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a-rootfs.mount: Deactivated successfully. 
Dec 16 12:58:16.467089 kubelet[2806]: E1216 12:58:16.467027 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:16.849121 kubelet[2806]: I1216 12:58:16.849075 2806 scope.go:117] "RemoveContainer" containerID="e6842a92492f4237d54b64f82edfaa942854b2276df76a2d19caf6f9b64ee12a" Dec 16 12:58:16.850739 containerd[1605]: time="2025-12-16T12:58:16.850691357Z" level=info msg="CreateContainer within sandbox \"50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:58:16.860858 containerd[1605]: time="2025-12-16T12:58:16.860392950Z" level=info msg="Container e0f5527233020c07078e0be932b7c5c8e7bd8abd725c8b2ae200f09ea7953ca0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:16.869179 containerd[1605]: time="2025-12-16T12:58:16.869065213Z" level=info msg="CreateContainer within sandbox \"50e7359948ba6ecb289ed00109b42bc69b2af97c2cf0813f09c39ea30fccc865\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e0f5527233020c07078e0be932b7c5c8e7bd8abd725c8b2ae200f09ea7953ca0\"" Dec 16 12:58:16.870845 containerd[1605]: time="2025-12-16T12:58:16.869877504Z" level=info msg="StartContainer for \"e0f5527233020c07078e0be932b7c5c8e7bd8abd725c8b2ae200f09ea7953ca0\"" Dec 16 12:58:16.870845 containerd[1605]: time="2025-12-16T12:58:16.870646825Z" level=info msg="connecting to shim e0f5527233020c07078e0be932b7c5c8e7bd8abd725c8b2ae200f09ea7953ca0" address="unix:///run/containerd/s/b2aa4deccd1ed00f6e4bc97f38a71bc847a96ae0e185a9912f477e3b23fa42c1" protocol=ttrpc version=3 Dec 16 12:58:16.899999 systemd[1]: Started cri-containerd-e0f5527233020c07078e0be932b7c5c8e7bd8abd725c8b2ae200f09ea7953ca0.scope - libcontainer container e0f5527233020c07078e0be932b7c5c8e7bd8abd725c8b2ae200f09ea7953ca0. 
Dec 16 12:58:16.914000 audit: BPF prog-id=139 op=LOAD Dec 16 12:58:16.915000 audit: BPF prog-id=140 op=LOAD Dec 16 12:58:16.915000 audit[3213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3097 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:16.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663535323732333330323063303730373865306265393332623763 Dec 16 12:58:16.915000 audit: BPF prog-id=140 op=UNLOAD Dec 16 12:58:16.915000 audit[3213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:16.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663535323732333330323063303730373865306265393332623763 Dec 16 12:58:16.915000 audit: BPF prog-id=141 op=LOAD Dec 16 12:58:16.915000 audit[3213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3097 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:16.915000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663535323732333330323063303730373865306265393332623763 Dec 16 12:58:16.916000 audit: BPF prog-id=142 op=LOAD Dec 16 12:58:16.916000 audit[3213]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3097 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:16.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663535323732333330323063303730373865306265393332623763 Dec 16 12:58:16.916000 audit: BPF prog-id=142 op=UNLOAD Dec 16 12:58:16.916000 audit[3213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:16.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663535323732333330323063303730373865306265393332623763 Dec 16 12:58:16.916000 audit: BPF prog-id=141 op=UNLOAD Dec 16 12:58:16.916000 audit[3213]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:16.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663535323732333330323063303730373865306265393332623763 Dec 16 12:58:16.916000 audit: BPF prog-id=143 op=LOAD Dec 16 12:58:16.916000 audit[3213]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3097 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:16.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663535323732333330323063303730373865306265393332623763 Dec 16 12:58:16.942615 containerd[1605]: time="2025-12-16T12:58:16.942553914Z" level=info msg="StartContainer for \"e0f5527233020c07078e0be932b7c5c8e7bd8abd725c8b2ae200f09ea7953ca0\" returns successfully" Dec 16 12:58:19.842038 sudo[1824]: pam_unix(sudo:session): session closed for user root Dec 16 12:58:19.841000 audit[1824]: USER_END pid=1824 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:58:19.843141 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 16 12:58:19.843194 kernel: audit: type=1106 audit(1765889899.841:496): pid=1824 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 12:58:19.843737 sshd[1823]: Connection closed by 10.0.0.1 port 50768 Dec 16 12:58:19.844118 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:19.841000 audit[1824]: CRED_DISP pid=1824 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:58:19.848700 systemd[1]: sshd@6-10.0.0.102:22-10.0.0.1:50768.service: Deactivated successfully. Dec 16 12:58:19.850950 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:58:19.851196 systemd[1]: session-7.scope: Consumed 5.866s CPU time, 217.3M memory peak. Dec 16 12:58:19.851885 kernel: audit: type=1104 audit(1765889899.841:497): pid=1824 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 12:58:19.844000 audit[1820]: USER_END pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:19.852380 systemd-logind[1584]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:58:19.854015 systemd-logind[1584]: Removed session 7. 
Dec 16 12:58:19.857543 kernel: audit: type=1106 audit(1765889899.844:498): pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:19.857601 kernel: audit: type=1104 audit(1765889899.844:499): pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:19.844000 audit[1820]: CRED_DISP pid=1820 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:19.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.102:22-10.0.0.1:50768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:19.866125 kernel: audit: type=1131 audit(1765889899.848:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.102:22-10.0.0.1:50768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:58:22.748000 audit[3279]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:22.758953 kernel: audit: type=1325 audit(1765889902.748:501): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:22.759020 kernel: audit: type=1300 audit(1765889902.748:501): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8a447390 a2=0 a3=7fff8a44737c items=0 ppid=2937 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:22.748000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8a447390 a2=0 a3=7fff8a44737c items=0 ppid=2937 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:22.761842 kernel: audit: type=1327 audit(1765889902.748:501): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:22.748000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:22.764675 kernel: audit: type=1325 audit(1765889902.759:502): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:22.759000 audit[3279]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:22.770191 kernel: audit: type=1300 audit(1765889902.759:502): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8a447390 
a2=0 a3=0 items=0 ppid=2937 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:22.759000 audit[3279]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8a447390 a2=0 a3=0 items=0 ppid=2937 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:22.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:22.775000 audit[3281]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:22.775000 audit[3281]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffebf77df0 a2=0 a3=7fffebf77ddc items=0 ppid=2937 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:22.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:22.784000 audit[3281]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:22.784000 audit[3281]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffebf77df0 a2=0 a3=0 items=0 ppid=2937 pid=3281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:22.784000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:24.604000 audit[3283]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:24.604000 audit[3283]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc5c557930 a2=0 a3=7ffc5c55791c items=0 ppid=2937 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.604000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:24.608000 audit[3283]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3283 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:24.608000 audit[3283]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc5c557930 a2=0 a3=0 items=0 ppid=2937 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:24.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:25.622000 audit[3285]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:25.628313 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 16 12:58:25.628459 kernel: audit: type=1325 audit(1765889905.622:507): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:25.622000 audit[3285]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=7480 a0=3 a1=7ffd211f32b0 a2=0 a3=7ffd211f329c items=0 ppid=2937 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:25.635717 kernel: audit: type=1300 audit(1765889905.622:507): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd211f32b0 a2=0 a3=7ffd211f329c items=0 ppid=2937 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:25.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:25.639863 kernel: audit: type=1327 audit(1765889905.622:507): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:25.648000 audit[3285]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:25.660062 kernel: audit: type=1325 audit(1765889905.648:508): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3285 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:25.660155 kernel: audit: type=1300 audit(1765889905.648:508): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd211f32b0 a2=0 a3=0 items=0 ppid=2937 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:25.648000 audit[3285]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd211f32b0 a2=0 a3=0 items=0 ppid=2937 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:25.663840 kernel: audit: type=1327 audit(1765889905.648:508): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:25.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:26.380000 audit[3287]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:26.380000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffbed3feb0 a2=0 a3=7fffbed3fe9c items=0 ppid=2937 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.393763 kernel: audit: type=1325 audit(1765889906.380:509): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:26.394080 kernel: audit: type=1300 audit(1765889906.380:509): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffbed3feb0 a2=0 a3=7fffbed3fe9c items=0 ppid=2937 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:26.400857 kernel: audit: type=1327 audit(1765889906.380:509): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:26.395000 audit[3287]: 
NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:26.395000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffbed3feb0 a2=0 a3=0 items=0 ppid=2937 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.395000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:26.406989 kernel: audit: type=1325 audit(1765889906.395:510): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:26.417813 systemd[1]: Created slice kubepods-besteffort-podb3587978_9904_4c6c_b55d_05ca8e33e827.slice - libcontainer container kubepods-besteffort-podb3587978_9904_4c6c_b55d_05ca8e33e827.slice. 
Dec 16 12:58:26.511253 kubelet[2806]: I1216 12:58:26.511198 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3587978-9904-4c6c-b55d-05ca8e33e827-tigera-ca-bundle\") pod \"calico-typha-5b988b4f67-gmqwn\" (UID: \"b3587978-9904-4c6c-b55d-05ca8e33e827\") " pod="calico-system/calico-typha-5b988b4f67-gmqwn" Dec 16 12:58:26.511253 kubelet[2806]: I1216 12:58:26.511238 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dlhp\" (UniqueName: \"kubernetes.io/projected/b3587978-9904-4c6c-b55d-05ca8e33e827-kube-api-access-4dlhp\") pod \"calico-typha-5b988b4f67-gmqwn\" (UID: \"b3587978-9904-4c6c-b55d-05ca8e33e827\") " pod="calico-system/calico-typha-5b988b4f67-gmqwn" Dec 16 12:58:26.511253 kubelet[2806]: I1216 12:58:26.511255 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3587978-9904-4c6c-b55d-05ca8e33e827-typha-certs\") pod \"calico-typha-5b988b4f67-gmqwn\" (UID: \"b3587978-9904-4c6c-b55d-05ca8e33e827\") " pod="calico-system/calico-typha-5b988b4f67-gmqwn" Dec 16 12:58:26.595078 systemd[1]: Created slice kubepods-besteffort-pod2f978adc_b014_4c32_bcde_415d20d3d251.slice - libcontainer container kubepods-besteffort-pod2f978adc_b014_4c32_bcde_415d20d3d251.slice. 
Dec 16 12:58:26.611956 kubelet[2806]: I1216 12:58:26.611609 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-var-run-calico\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.611956 kubelet[2806]: I1216 12:58:26.611653 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9b5\" (UniqueName: \"kubernetes.io/projected/2f978adc-b014-4c32-bcde-415d20d3d251-kube-api-access-rh9b5\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.611956 kubelet[2806]: I1216 12:58:26.611676 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-cni-net-dir\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.611956 kubelet[2806]: I1216 12:58:26.611692 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-cni-log-dir\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.611956 kubelet[2806]: I1216 12:58:26.611706 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-flexvol-driver-host\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.612220 kubelet[2806]: I1216 
12:58:26.611721 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-cni-bin-dir\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.612220 kubelet[2806]: I1216 12:58:26.611733 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-lib-modules\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.612220 kubelet[2806]: I1216 12:58:26.611745 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-var-lib-calico\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.612220 kubelet[2806]: I1216 12:58:26.611814 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f978adc-b014-4c32-bcde-415d20d3d251-node-certs\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.612220 kubelet[2806]: I1216 12:58:26.611859 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-policysync\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.612357 kubelet[2806]: I1216 12:58:26.611878 2806 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f978adc-b014-4c32-bcde-415d20d3d251-xtables-lock\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.612357 kubelet[2806]: I1216 12:58:26.611895 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f978adc-b014-4c32-bcde-415d20d3d251-tigera-ca-bundle\") pod \"calico-node-vrqxg\" (UID: \"2f978adc-b014-4c32-bcde-415d20d3d251\") " pod="calico-system/calico-node-vrqxg" Dec 16 12:58:26.715758 kubelet[2806]: E1216 12:58:26.715650 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.715758 kubelet[2806]: W1216 12:58:26.715674 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.715758 kubelet[2806]: E1216 12:58:26.715700 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.717893 kubelet[2806]: E1216 12:58:26.717854 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.717893 kubelet[2806]: W1216 12:58:26.717878 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.717988 kubelet[2806]: E1216 12:58:26.717897 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.723491 kubelet[2806]: E1216 12:58:26.723360 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:26.723491 kubelet[2806]: E1216 12:58:26.723395 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.723491 kubelet[2806]: W1216 12:58:26.723417 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.723491 kubelet[2806]: E1216 12:58:26.723427 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.724168 containerd[1605]: time="2025-12-16T12:58:26.724133262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b988b4f67-gmqwn,Uid:b3587978-9904-4c6c-b55d-05ca8e33e827,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:26.748850 containerd[1605]: time="2025-12-16T12:58:26.748605037Z" level=info msg="connecting to shim 125ba2e5bcbb8a07788ec037ca7ad6fc2acf91a7b43252e56486401582655ced" address="unix:///run/containerd/s/eabf05b1af58b74a2ea32041b6539b6deb43908152328e3c8080bfa3fc57cbbd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:26.782191 systemd[1]: Started cri-containerd-125ba2e5bcbb8a07788ec037ca7ad6fc2acf91a7b43252e56486401582655ced.scope - libcontainer container 125ba2e5bcbb8a07788ec037ca7ad6fc2acf91a7b43252e56486401582655ced. Dec 16 12:58:26.799000 audit: BPF prog-id=144 op=LOAD Dec 16 12:58:26.800000 audit: BPF prog-id=145 op=LOAD Dec 16 12:58:26.800000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3303 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132356261326535626362623861303737383865633033376361376164 Dec 16 12:58:26.800000 audit: BPF prog-id=145 op=UNLOAD Dec 16 12:58:26.800000 audit[3314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.800000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132356261326535626362623861303737383865633033376361376164 Dec 16 12:58:26.800000 audit: BPF prog-id=146 op=LOAD Dec 16 12:58:26.800000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3303 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132356261326535626362623861303737383865633033376361376164 Dec 16 12:58:26.800000 audit: BPF prog-id=147 op=LOAD Dec 16 12:58:26.800000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3303 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132356261326535626362623861303737383865633033376361376164 Dec 16 12:58:26.800000 audit: BPF prog-id=147 op=UNLOAD Dec 16 12:58:26.800000 audit[3314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:58:26.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132356261326535626362623861303737383865633033376361376164 Dec 16 12:58:26.800000 audit: BPF prog-id=146 op=UNLOAD Dec 16 12:58:26.800000 audit[3314]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132356261326535626362623861303737383865633033376361376164 Dec 16 12:58:26.800000 audit: BPF prog-id=148 op=LOAD Dec 16 12:58:26.800000 audit[3314]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3303 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132356261326535626362623861303737383865633033376361376164 Dec 16 12:58:26.804566 kubelet[2806]: E1216 12:58:26.804522 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:26.808947 kubelet[2806]: E1216 12:58:26.808915 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.808998 kubelet[2806]: W1216 12:58:26.808944 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.808998 kubelet[2806]: E1216 12:58:26.808970 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.809884 kubelet[2806]: E1216 12:58:26.809859 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.810586 kubelet[2806]: W1216 12:58:26.809904 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.810586 kubelet[2806]: E1216 12:58:26.810581 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.811848 kubelet[2806]: E1216 12:58:26.810990 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.811848 kubelet[2806]: W1216 12:58:26.811006 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.811848 kubelet[2806]: E1216 12:58:26.811017 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.811848 kubelet[2806]: E1216 12:58:26.811330 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.811848 kubelet[2806]: W1216 12:58:26.811339 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.811848 kubelet[2806]: E1216 12:58:26.811351 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.811848 kubelet[2806]: E1216 12:58:26.811578 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.811848 kubelet[2806]: W1216 12:58:26.811586 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.811848 kubelet[2806]: E1216 12:58:26.811599 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.811848 kubelet[2806]: E1216 12:58:26.811804 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.812089 kubelet[2806]: W1216 12:58:26.811813 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.812089 kubelet[2806]: E1216 12:58:26.811843 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.812089 kubelet[2806]: E1216 12:58:26.812070 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.812089 kubelet[2806]: W1216 12:58:26.812080 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.812089 kubelet[2806]: E1216 12:58:26.812089 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.812439 kubelet[2806]: E1216 12:58:26.812413 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.812439 kubelet[2806]: W1216 12:58:26.812431 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.812492 kubelet[2806]: E1216 12:58:26.812441 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.813112 kubelet[2806]: E1216 12:58:26.813091 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.813144 kubelet[2806]: W1216 12:58:26.813131 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.813144 kubelet[2806]: E1216 12:58:26.813142 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.813567 kubelet[2806]: E1216 12:58:26.813426 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.813567 kubelet[2806]: W1216 12:58:26.813440 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.813803 kubelet[2806]: E1216 12:58:26.813660 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.814843 kubelet[2806]: E1216 12:58:26.814727 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.814843 kubelet[2806]: W1216 12:58:26.814740 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.814843 kubelet[2806]: E1216 12:58:26.814752 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.815719 kubelet[2806]: E1216 12:58:26.815698 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.816066 kubelet[2806]: W1216 12:58:26.816007 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.816066 kubelet[2806]: E1216 12:58:26.816030 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.816935 kubelet[2806]: E1216 12:58:26.816900 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.817013 kubelet[2806]: W1216 12:58:26.816992 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.817212 kubelet[2806]: E1216 12:58:26.817189 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.818136 kubelet[2806]: E1216 12:58:26.817959 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.818136 kubelet[2806]: W1216 12:58:26.818129 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.818202 kubelet[2806]: E1216 12:58:26.818147 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.818743 kubelet[2806]: E1216 12:58:26.818725 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.818743 kubelet[2806]: W1216 12:58:26.818739 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.818817 kubelet[2806]: E1216 12:58:26.818750 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.819142 kubelet[2806]: E1216 12:58:26.819114 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.819142 kubelet[2806]: W1216 12:58:26.819127 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.819142 kubelet[2806]: E1216 12:58:26.819136 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.819542 kubelet[2806]: E1216 12:58:26.819524 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.819542 kubelet[2806]: W1216 12:58:26.819536 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.819600 kubelet[2806]: E1216 12:58:26.819545 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.819793 kubelet[2806]: E1216 12:58:26.819774 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.819793 kubelet[2806]: W1216 12:58:26.819788 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.819897 kubelet[2806]: E1216 12:58:26.819798 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.820056 kubelet[2806]: E1216 12:58:26.820038 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.820056 kubelet[2806]: W1216 12:58:26.820051 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.820122 kubelet[2806]: E1216 12:58:26.820059 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.820267 kubelet[2806]: E1216 12:58:26.820248 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.820267 kubelet[2806]: W1216 12:58:26.820263 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.820339 kubelet[2806]: E1216 12:58:26.820288 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.820659 kubelet[2806]: E1216 12:58:26.820640 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.820659 kubelet[2806]: W1216 12:58:26.820655 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.820659 kubelet[2806]: E1216 12:58:26.820665 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.820742 kubelet[2806]: I1216 12:58:26.820686 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4e3b9dec-c7ec-4533-9b5f-135d8bcc981d-varrun\") pod \"csi-node-driver-4lx5v\" (UID: \"4e3b9dec-c7ec-4533-9b5f-135d8bcc981d\") " pod="calico-system/csi-node-driver-4lx5v" Dec 16 12:58:26.820965 kubelet[2806]: E1216 12:58:26.820946 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.820965 kubelet[2806]: W1216 12:58:26.820961 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.821024 kubelet[2806]: E1216 12:58:26.820971 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.821024 kubelet[2806]: I1216 12:58:26.820984 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e3b9dec-c7ec-4533-9b5f-135d8bcc981d-kubelet-dir\") pod \"csi-node-driver-4lx5v\" (UID: \"4e3b9dec-c7ec-4533-9b5f-135d8bcc981d\") " pod="calico-system/csi-node-driver-4lx5v" Dec 16 12:58:26.821216 kubelet[2806]: E1216 12:58:26.821198 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.821216 kubelet[2806]: W1216 12:58:26.821211 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.821282 kubelet[2806]: E1216 12:58:26.821220 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.821282 kubelet[2806]: I1216 12:58:26.821244 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4e3b9dec-c7ec-4533-9b5f-135d8bcc981d-registration-dir\") pod \"csi-node-driver-4lx5v\" (UID: \"4e3b9dec-c7ec-4533-9b5f-135d8bcc981d\") " pod="calico-system/csi-node-driver-4lx5v" Dec 16 12:58:26.821573 kubelet[2806]: E1216 12:58:26.821556 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.821573 kubelet[2806]: W1216 12:58:26.821568 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.821647 kubelet[2806]: E1216 12:58:26.821577 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.821855 kubelet[2806]: E1216 12:58:26.821834 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.821855 kubelet[2806]: W1216 12:58:26.821849 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.821855 kubelet[2806]: E1216 12:58:26.821857 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.822110 kubelet[2806]: E1216 12:58:26.822091 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.822110 kubelet[2806]: W1216 12:58:26.822106 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.822184 kubelet[2806]: E1216 12:58:26.822116 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.822401 kubelet[2806]: E1216 12:58:26.822381 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.822401 kubelet[2806]: W1216 12:58:26.822396 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.822463 kubelet[2806]: E1216 12:58:26.822406 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.822666 kubelet[2806]: E1216 12:58:26.822640 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.822666 kubelet[2806]: W1216 12:58:26.822654 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.822666 kubelet[2806]: E1216 12:58:26.822664 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.822735 kubelet[2806]: I1216 12:58:26.822697 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4e3b9dec-c7ec-4533-9b5f-135d8bcc981d-socket-dir\") pod \"csi-node-driver-4lx5v\" (UID: \"4e3b9dec-c7ec-4533-9b5f-135d8bcc981d\") " pod="calico-system/csi-node-driver-4lx5v" Dec 16 12:58:26.823046 kubelet[2806]: E1216 12:58:26.823017 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.823046 kubelet[2806]: W1216 12:58:26.823036 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.823113 kubelet[2806]: E1216 12:58:26.823046 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.823186 kubelet[2806]: I1216 12:58:26.823154 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zw4v\" (UniqueName: \"kubernetes.io/projected/4e3b9dec-c7ec-4533-9b5f-135d8bcc981d-kube-api-access-2zw4v\") pod \"csi-node-driver-4lx5v\" (UID: \"4e3b9dec-c7ec-4533-9b5f-135d8bcc981d\") " pod="calico-system/csi-node-driver-4lx5v" Dec 16 12:58:26.823391 kubelet[2806]: E1216 12:58:26.823361 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.823391 kubelet[2806]: W1216 12:58:26.823378 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.823391 kubelet[2806]: E1216 12:58:26.823390 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.823724 kubelet[2806]: E1216 12:58:26.823705 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.823724 kubelet[2806]: W1216 12:58:26.823721 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.823787 kubelet[2806]: E1216 12:58:26.823733 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.824154 kubelet[2806]: E1216 12:58:26.824088 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.824154 kubelet[2806]: W1216 12:58:26.824103 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.824154 kubelet[2806]: E1216 12:58:26.824114 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.824358 kubelet[2806]: E1216 12:58:26.824340 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.824358 kubelet[2806]: W1216 12:58:26.824355 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.824426 kubelet[2806]: E1216 12:58:26.824366 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.824600 kubelet[2806]: E1216 12:58:26.824581 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.824600 kubelet[2806]: W1216 12:58:26.824595 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.824668 kubelet[2806]: E1216 12:58:26.824605 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.824859 kubelet[2806]: E1216 12:58:26.824812 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.824859 kubelet[2806]: W1216 12:58:26.824845 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.824859 kubelet[2806]: E1216 12:58:26.824855 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.840846 containerd[1605]: time="2025-12-16T12:58:26.840782549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b988b4f67-gmqwn,Uid:b3587978-9904-4c6c-b55d-05ca8e33e827,Namespace:calico-system,Attempt:0,} returns sandbox id \"125ba2e5bcbb8a07788ec037ca7ad6fc2acf91a7b43252e56486401582655ced\"" Dec 16 12:58:26.844509 kubelet[2806]: E1216 12:58:26.844482 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:26.848244 containerd[1605]: time="2025-12-16T12:58:26.848194990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:58:26.898203 kubelet[2806]: E1216 12:58:26.898161 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:26.898770 containerd[1605]: time="2025-12-16T12:58:26.898717600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrqxg,Uid:2f978adc-b014-4c32-bcde-415d20d3d251,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:26.924370 kubelet[2806]: E1216 12:58:26.924326 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.924370 kubelet[2806]: W1216 12:58:26.924356 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.924370 kubelet[2806]: E1216 12:58:26.924383 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.924561 containerd[1605]: time="2025-12-16T12:58:26.924328468Z" level=info msg="connecting to shim 27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695" address="unix:///run/containerd/s/157f9e5b4af118944988a8cd012c7837aa74cf01d6ad26679c715693005e6b9e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:26.924738 kubelet[2806]: E1216 12:58:26.924722 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.924738 kubelet[2806]: W1216 12:58:26.924734 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.924738 kubelet[2806]: E1216 12:58:26.924743 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.925028 kubelet[2806]: E1216 12:58:26.925000 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.925028 kubelet[2806]: W1216 12:58:26.925013 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.925028 kubelet[2806]: E1216 12:58:26.925021 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.925219 kubelet[2806]: E1216 12:58:26.925197 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.925219 kubelet[2806]: W1216 12:58:26.925204 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.925219 kubelet[2806]: E1216 12:58:26.925212 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.925449 kubelet[2806]: E1216 12:58:26.925434 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.925449 kubelet[2806]: W1216 12:58:26.925446 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.925507 kubelet[2806]: E1216 12:58:26.925463 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.925699 kubelet[2806]: E1216 12:58:26.925683 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.925699 kubelet[2806]: W1216 12:58:26.925693 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.925754 kubelet[2806]: E1216 12:58:26.925701 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.925938 kubelet[2806]: E1216 12:58:26.925919 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.925938 kubelet[2806]: W1216 12:58:26.925930 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.925938 kubelet[2806]: E1216 12:58:26.925939 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.926194 kubelet[2806]: E1216 12:58:26.926165 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.926227 kubelet[2806]: W1216 12:58:26.926190 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.926227 kubelet[2806]: E1216 12:58:26.926215 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.926483 kubelet[2806]: E1216 12:58:26.926455 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.926483 kubelet[2806]: W1216 12:58:26.926468 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.926483 kubelet[2806]: E1216 12:58:26.926479 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.926800 kubelet[2806]: E1216 12:58:26.926782 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.926800 kubelet[2806]: W1216 12:58:26.926794 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.926883 kubelet[2806]: E1216 12:58:26.926837 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.927128 kubelet[2806]: E1216 12:58:26.927103 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.927128 kubelet[2806]: W1216 12:58:26.927115 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.927128 kubelet[2806]: E1216 12:58:26.927125 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.927404 kubelet[2806]: E1216 12:58:26.927379 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.927439 kubelet[2806]: W1216 12:58:26.927392 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.927439 kubelet[2806]: E1216 12:58:26.927422 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.927671 kubelet[2806]: E1216 12:58:26.927633 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.927671 kubelet[2806]: W1216 12:58:26.927654 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.927671 kubelet[2806]: E1216 12:58:26.927665 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.927965 kubelet[2806]: E1216 12:58:26.927942 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.927965 kubelet[2806]: W1216 12:58:26.927961 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.928026 kubelet[2806]: E1216 12:58:26.927970 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.928220 kubelet[2806]: E1216 12:58:26.928203 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.928220 kubelet[2806]: W1216 12:58:26.928214 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.928308 kubelet[2806]: E1216 12:58:26.928222 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.928515 kubelet[2806]: E1216 12:58:26.928498 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.928545 kubelet[2806]: W1216 12:58:26.928509 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.928545 kubelet[2806]: E1216 12:58:26.928540 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.928946 kubelet[2806]: E1216 12:58:26.928927 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.928946 kubelet[2806]: W1216 12:58:26.928939 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.928946 kubelet[2806]: E1216 12:58:26.928948 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.929324 kubelet[2806]: E1216 12:58:26.929307 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.929324 kubelet[2806]: W1216 12:58:26.929319 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.929391 kubelet[2806]: E1216 12:58:26.929328 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.930605 kubelet[2806]: E1216 12:58:26.930578 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.930605 kubelet[2806]: W1216 12:58:26.930598 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.930605 kubelet[2806]: E1216 12:58:26.930609 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.930968 kubelet[2806]: E1216 12:58:26.930952 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.930968 kubelet[2806]: W1216 12:58:26.930964 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.931039 kubelet[2806]: E1216 12:58:26.930974 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.931314 kubelet[2806]: E1216 12:58:26.931285 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.931314 kubelet[2806]: W1216 12:58:26.931297 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.931314 kubelet[2806]: E1216 12:58:26.931316 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.932876 kubelet[2806]: E1216 12:58:26.931943 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.932876 kubelet[2806]: W1216 12:58:26.931957 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.932876 kubelet[2806]: E1216 12:58:26.931966 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.932876 kubelet[2806]: E1216 12:58:26.932248 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.932876 kubelet[2806]: W1216 12:58:26.932257 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.932876 kubelet[2806]: E1216 12:58:26.932266 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.932876 kubelet[2806]: E1216 12:58:26.932514 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.932876 kubelet[2806]: W1216 12:58:26.932521 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.932876 kubelet[2806]: E1216 12:58:26.932529 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.932876 kubelet[2806]: E1216 12:58:26.932715 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.933145 kubelet[2806]: W1216 12:58:26.932721 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.933145 kubelet[2806]: E1216 12:58:26.932729 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:26.938068 kubelet[2806]: E1216 12:58:26.938034 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:26.938068 kubelet[2806]: W1216 12:58:26.938050 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:26.938068 kubelet[2806]: E1216 12:58:26.938059 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:26.961081 systemd[1]: Started cri-containerd-27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695.scope - libcontainer container 27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695. Dec 16 12:58:26.974000 audit: BPF prog-id=149 op=LOAD Dec 16 12:58:26.975000 audit: BPF prog-id=150 op=LOAD Dec 16 12:58:26.975000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3393 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237613063636634656531313366313135393035333537386461623234 Dec 16 12:58:26.975000 audit: BPF prog-id=150 op=UNLOAD Dec 16 12:58:26.975000 audit[3426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237613063636634656531313366313135393035333537386461623234 Dec 16 12:58:26.975000 audit: BPF prog-id=151 op=LOAD Dec 16 12:58:26.975000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3393 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237613063636634656531313366313135393035333537386461623234 Dec 16 12:58:26.975000 audit: BPF prog-id=152 op=LOAD Dec 16 12:58:26.975000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3393 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237613063636634656531313366313135393035333537386461623234 Dec 16 12:58:26.975000 audit: BPF prog-id=152 op=UNLOAD Dec 16 12:58:26.975000 audit[3426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237613063636634656531313366313135393035333537386461623234 Dec 16 12:58:26.975000 audit: BPF prog-id=151 op=UNLOAD Dec 16 12:58:26.975000 audit[3426]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.975000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237613063636634656531313366313135393035333537386461623234 Dec 16 12:58:26.976000 audit: BPF prog-id=153 op=LOAD Dec 16 12:58:26.976000 audit[3426]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3393 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:26.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237613063636634656531313366313135393035333537386461623234 Dec 16 12:58:26.997032 containerd[1605]: time="2025-12-16T12:58:26.996981659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vrqxg,Uid:2f978adc-b014-4c32-bcde-415d20d3d251,Namespace:calico-system,Attempt:0,} returns 
sandbox id \"27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695\"" Dec 16 12:58:26.997621 kubelet[2806]: E1216 12:58:26.997583 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:27.410000 audit[3459]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:27.410000 audit[3459]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe02a3c250 a2=0 a3=7ffe02a3c23c items=0 ppid=2937 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:27.415000 audit[3459]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3459 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:27.415000 audit[3459]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe02a3c250 a2=0 a3=0 items=0 ppid=2937 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:27.415000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:28.597051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969760389.mount: Deactivated successfully. 
Dec 16 12:58:28.796434 kubelet[2806]: E1216 12:58:28.796358 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:30.260043 containerd[1605]: time="2025-12-16T12:58:30.259993997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:30.260867 containerd[1605]: time="2025-12-16T12:58:30.260800342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35230631" Dec 16 12:58:30.262146 containerd[1605]: time="2025-12-16T12:58:30.262108600Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:30.264867 containerd[1605]: time="2025-12-16T12:58:30.264243082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:30.264867 containerd[1605]: time="2025-12-16T12:58:30.264777045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.41653651s" Dec 16 12:58:30.264867 containerd[1605]: time="2025-12-16T12:58:30.264807974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 12:58:30.267608 containerd[1605]: time="2025-12-16T12:58:30.266260685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:58:30.281890 containerd[1605]: time="2025-12-16T12:58:30.281817123Z" level=info msg="CreateContainer within sandbox \"125ba2e5bcbb8a07788ec037ca7ad6fc2acf91a7b43252e56486401582655ced\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:58:30.355498 containerd[1605]: time="2025-12-16T12:58:30.355430130Z" level=info msg="Container ff763770a1cabc8276d9a259f55e31a7dcc433cb6b5862d6581884845ba06812: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:30.365081 containerd[1605]: time="2025-12-16T12:58:30.365025384Z" level=info msg="CreateContainer within sandbox \"125ba2e5bcbb8a07788ec037ca7ad6fc2acf91a7b43252e56486401582655ced\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ff763770a1cabc8276d9a259f55e31a7dcc433cb6b5862d6581884845ba06812\"" Dec 16 12:58:30.365676 containerd[1605]: time="2025-12-16T12:58:30.365624501Z" level=info msg="StartContainer for \"ff763770a1cabc8276d9a259f55e31a7dcc433cb6b5862d6581884845ba06812\"" Dec 16 12:58:30.367015 containerd[1605]: time="2025-12-16T12:58:30.366966472Z" level=info msg="connecting to shim ff763770a1cabc8276d9a259f55e31a7dcc433cb6b5862d6581884845ba06812" address="unix:///run/containerd/s/eabf05b1af58b74a2ea32041b6539b6deb43908152328e3c8080bfa3fc57cbbd" protocol=ttrpc version=3 Dec 16 12:58:30.394997 systemd[1]: Started cri-containerd-ff763770a1cabc8276d9a259f55e31a7dcc433cb6b5862d6581884845ba06812.scope - libcontainer container ff763770a1cabc8276d9a259f55e31a7dcc433cb6b5862d6581884845ba06812. 
Dec 16 12:58:30.411000 audit: BPF prog-id=154 op=LOAD Dec 16 12:58:30.412000 audit: BPF prog-id=155 op=LOAD Dec 16 12:58:30.412000 audit[3470]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=3303 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:30.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666373633373730613163616263383237366439613235396635356533 Dec 16 12:58:30.412000 audit: BPF prog-id=155 op=UNLOAD Dec 16 12:58:30.412000 audit[3470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:30.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666373633373730613163616263383237366439613235396635356533 Dec 16 12:58:30.412000 audit: BPF prog-id=156 op=LOAD Dec 16 12:58:30.412000 audit[3470]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3303 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:30.412000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666373633373730613163616263383237366439613235396635356533 Dec 16 12:58:30.412000 audit: BPF prog-id=157 op=LOAD Dec 16 12:58:30.412000 audit[3470]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3303 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:30.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666373633373730613163616263383237366439613235396635356533 Dec 16 12:58:30.412000 audit: BPF prog-id=157 op=UNLOAD Dec 16 12:58:30.412000 audit[3470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:30.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666373633373730613163616263383237366439613235396635356533 Dec 16 12:58:30.412000 audit: BPF prog-id=156 op=UNLOAD Dec 16 12:58:30.412000 audit[3470]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3303 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:30.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666373633373730613163616263383237366439613235396635356533 Dec 16 12:58:30.412000 audit: BPF prog-id=158 op=LOAD Dec 16 12:58:30.412000 audit[3470]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3303 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:30.412000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666373633373730613163616263383237366439613235396635356533 Dec 16 12:58:30.452385 containerd[1605]: time="2025-12-16T12:58:30.452328432Z" level=info msg="StartContainer for \"ff763770a1cabc8276d9a259f55e31a7dcc433cb6b5862d6581884845ba06812\" returns successfully" Dec 16 12:58:30.796034 kubelet[2806]: E1216 12:58:30.795970 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:30.885900 kubelet[2806]: E1216 12:58:30.885859 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:30.895391 kubelet[2806]: I1216 12:58:30.894880 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b988b4f67-gmqwn" 
podStartSLOduration=1.474267971 podStartE2EDuration="4.89486174s" podCreationTimestamp="2025-12-16 12:58:26 +0000 UTC" firstStartedPulling="2025-12-16 12:58:26.845322187 +0000 UTC m=+23.138460671" lastFinishedPulling="2025-12-16 12:58:30.265915956 +0000 UTC m=+26.559054440" observedRunningTime="2025-12-16 12:58:30.89421245 +0000 UTC m=+27.187350934" watchObservedRunningTime="2025-12-16 12:58:30.89486174 +0000 UTC m=+27.188000244" Dec 16 12:58:30.946449 kubelet[2806]: E1216 12:58:30.946398 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.946449 kubelet[2806]: W1216 12:58:30.946426 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.947691 kubelet[2806]: E1216 12:58:30.947650 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.947911 kubelet[2806]: E1216 12:58:30.947884 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.947911 kubelet[2806]: W1216 12:58:30.947896 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.947911 kubelet[2806]: E1216 12:58:30.947906 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.948116 kubelet[2806]: E1216 12:58:30.948090 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.948116 kubelet[2806]: W1216 12:58:30.948102 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.948116 kubelet[2806]: E1216 12:58:30.948110 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.948367 kubelet[2806]: E1216 12:58:30.948339 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.948367 kubelet[2806]: W1216 12:58:30.948352 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.948367 kubelet[2806]: E1216 12:58:30.948360 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.948576 kubelet[2806]: E1216 12:58:30.948560 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.948576 kubelet[2806]: W1216 12:58:30.948570 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.948576 kubelet[2806]: E1216 12:58:30.948579 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.948853 kubelet[2806]: E1216 12:58:30.948749 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.948853 kubelet[2806]: W1216 12:58:30.948759 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.948853 kubelet[2806]: E1216 12:58:30.948777 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.949066 kubelet[2806]: E1216 12:58:30.948991 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.949066 kubelet[2806]: W1216 12:58:30.948998 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.949066 kubelet[2806]: E1216 12:58:30.949007 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.949356 kubelet[2806]: E1216 12:58:30.949325 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.949415 kubelet[2806]: W1216 12:58:30.949352 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.949415 kubelet[2806]: E1216 12:58:30.949379 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.949678 kubelet[2806]: E1216 12:58:30.949658 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.949678 kubelet[2806]: W1216 12:58:30.949672 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.949741 kubelet[2806]: E1216 12:58:30.949683 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.949974 kubelet[2806]: E1216 12:58:30.949958 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.949974 kubelet[2806]: W1216 12:58:30.949970 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.950058 kubelet[2806]: E1216 12:58:30.949981 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.950238 kubelet[2806]: E1216 12:58:30.950222 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.950238 kubelet[2806]: W1216 12:58:30.950235 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.950307 kubelet[2806]: E1216 12:58:30.950245 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.950466 kubelet[2806]: E1216 12:58:30.950449 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.950466 kubelet[2806]: W1216 12:58:30.950461 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.950524 kubelet[2806]: E1216 12:58:30.950472 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.950675 kubelet[2806]: E1216 12:58:30.950650 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.950675 kubelet[2806]: W1216 12:58:30.950661 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.950675 kubelet[2806]: E1216 12:58:30.950670 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.950865 kubelet[2806]: E1216 12:58:30.950849 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.950865 kubelet[2806]: W1216 12:58:30.950862 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.950924 kubelet[2806]: E1216 12:58:30.950871 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.951113 kubelet[2806]: E1216 12:58:30.951092 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.951113 kubelet[2806]: W1216 12:58:30.951109 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.951198 kubelet[2806]: E1216 12:58:30.951120 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.951682 kubelet[2806]: E1216 12:58:30.951661 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.951682 kubelet[2806]: W1216 12:58:30.951673 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.951682 kubelet[2806]: E1216 12:58:30.951684 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.951932 kubelet[2806]: E1216 12:58:30.951915 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.951932 kubelet[2806]: W1216 12:58:30.951928 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.951992 kubelet[2806]: E1216 12:58:30.951938 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.952152 kubelet[2806]: E1216 12:58:30.952126 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.952152 kubelet[2806]: W1216 12:58:30.952139 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.952152 kubelet[2806]: E1216 12:58:30.952148 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.952464 kubelet[2806]: E1216 12:58:30.952442 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.952464 kubelet[2806]: W1216 12:58:30.952457 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.952547 kubelet[2806]: E1216 12:58:30.952469 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.952683 kubelet[2806]: E1216 12:58:30.952664 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.952683 kubelet[2806]: W1216 12:58:30.952676 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.952767 kubelet[2806]: E1216 12:58:30.952685 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.952882 kubelet[2806]: E1216 12:58:30.952864 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.952882 kubelet[2806]: W1216 12:58:30.952876 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.952947 kubelet[2806]: E1216 12:58:30.952884 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.953107 kubelet[2806]: E1216 12:58:30.953090 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.953107 kubelet[2806]: W1216 12:58:30.953102 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.953164 kubelet[2806]: E1216 12:58:30.953111 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.953419 kubelet[2806]: E1216 12:58:30.953382 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.953419 kubelet[2806]: W1216 12:58:30.953404 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.953419 kubelet[2806]: E1216 12:58:30.953428 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.953690 kubelet[2806]: E1216 12:58:30.953676 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.953690 kubelet[2806]: W1216 12:58:30.953686 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.953754 kubelet[2806]: E1216 12:58:30.953695 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.953921 kubelet[2806]: E1216 12:58:30.953905 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.953921 kubelet[2806]: W1216 12:58:30.953916 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.953998 kubelet[2806]: E1216 12:58:30.953927 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.954150 kubelet[2806]: E1216 12:58:30.954135 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.954150 kubelet[2806]: W1216 12:58:30.954145 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.954215 kubelet[2806]: E1216 12:58:30.954152 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.954352 kubelet[2806]: E1216 12:58:30.954338 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.954352 kubelet[2806]: W1216 12:58:30.954347 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.954403 kubelet[2806]: E1216 12:58:30.954355 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.954593 kubelet[2806]: E1216 12:58:30.954576 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.954593 kubelet[2806]: W1216 12:58:30.954589 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.954652 kubelet[2806]: E1216 12:58:30.954599 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.954924 kubelet[2806]: E1216 12:58:30.954907 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.954924 kubelet[2806]: W1216 12:58:30.954919 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.954990 kubelet[2806]: E1216 12:58:30.954929 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.955098 kubelet[2806]: E1216 12:58:30.955083 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.955098 kubelet[2806]: W1216 12:58:30.955093 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.955156 kubelet[2806]: E1216 12:58:30.955100 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.955311 kubelet[2806]: E1216 12:58:30.955294 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.955311 kubelet[2806]: W1216 12:58:30.955305 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.955372 kubelet[2806]: E1216 12:58:30.955313 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:30.955659 kubelet[2806]: E1216 12:58:30.955634 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.955659 kubelet[2806]: W1216 12:58:30.955651 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.955746 kubelet[2806]: E1216 12:58:30.955665 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:58:30.955906 kubelet[2806]: E1216 12:58:30.955890 2806 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:58:30.955906 kubelet[2806]: W1216 12:58:30.955901 2806 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:58:30.955959 kubelet[2806]: E1216 12:58:30.955909 2806 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:58:31.508748 containerd[1605]: time="2025-12-16T12:58:31.508663518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:31.509627 containerd[1605]: time="2025-12-16T12:58:31.509579069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:31.510753 containerd[1605]: time="2025-12-16T12:58:31.510714482Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:31.512696 containerd[1605]: time="2025-12-16T12:58:31.512655080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:31.513197 containerd[1605]: time="2025-12-16T12:58:31.513149850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.245539338s" Dec 16 12:58:31.513197 containerd[1605]: time="2025-12-16T12:58:31.513189093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 12:58:31.516271 containerd[1605]: time="2025-12-16T12:58:31.516239456Z" level=info msg="CreateContainer within sandbox \"27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:58:31.524733 containerd[1605]: time="2025-12-16T12:58:31.524689495Z" level=info msg="Container fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:31.531493 containerd[1605]: time="2025-12-16T12:58:31.531454127Z" level=info msg="CreateContainer within sandbox \"27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea\"" Dec 16 12:58:31.531923 containerd[1605]: time="2025-12-16T12:58:31.531905817Z" level=info msg="StartContainer for \"fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea\"" Dec 16 12:58:31.533245 containerd[1605]: time="2025-12-16T12:58:31.533220497Z" level=info msg="connecting to shim fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea" address="unix:///run/containerd/s/157f9e5b4af118944988a8cd012c7837aa74cf01d6ad26679c715693005e6b9e" protocol=ttrpc version=3 Dec 16 12:58:31.554019 systemd[1]: Started cri-containerd-fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea.scope - libcontainer container 
fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea. Dec 16 12:58:31.628000 audit: BPF prog-id=159 op=LOAD Dec 16 12:58:31.630708 kernel: kauditd_printk_skb: 74 callbacks suppressed Dec 16 12:58:31.630886 kernel: audit: type=1334 audit(1765889911.628:537): prog-id=159 op=LOAD Dec 16 12:58:31.628000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.638513 kernel: audit: type=1300 audit(1765889911.628:537): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.638591 kernel: audit: type=1327 audit(1765889911.628:537): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.646521 kernel: audit: type=1334 audit(1765889911.628:538): prog-id=160 op=LOAD Dec 16 12:58:31.628000 audit: BPF prog-id=160 op=LOAD Dec 16 12:58:31.653685 kernel: audit: type=1300 audit(1765889911.628:538): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.628000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.628000 audit: BPF prog-id=160 op=UNLOAD Dec 16 12:58:31.660978 kernel: audit: type=1327 audit(1765889911.628:538): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.661027 kernel: audit: type=1334 audit(1765889911.628:539): prog-id=160 op=UNLOAD Dec 16 12:58:31.661061 kernel: audit: type=1300 audit(1765889911.628:539): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.628000 audit[3547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.628000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.666789 containerd[1605]: time="2025-12-16T12:58:31.666718965Z" level=info msg="StartContainer for \"fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea\" returns successfully" Dec 16 12:58:31.672174 kernel: audit: type=1327 audit(1765889911.628:539): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.672255 kernel: audit: type=1334 audit(1765889911.628:540): prog-id=159 op=UNLOAD Dec 16 12:58:31.628000 audit: BPF prog-id=159 op=UNLOAD Dec 16 12:58:31.628000 audit[3547]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.628000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.628000 audit: BPF prog-id=161 op=LOAD Dec 16 12:58:31.628000 audit[3547]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3393 pid=3547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:31.628000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664333837396135346132373637643531346533396566343937343932 Dec 16 12:58:31.680990 systemd[1]: cri-containerd-fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea.scope: Deactivated successfully. Dec 16 12:58:31.683546 containerd[1605]: time="2025-12-16T12:58:31.683508967Z" level=info msg="received container exit event container_id:\"fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea\" id:\"fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea\" pid:3560 exited_at:{seconds:1765889911 nanos:683290617}" Dec 16 12:58:31.690000 audit: BPF prog-id=161 op=UNLOAD Dec 16 12:58:31.706461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd3879a54a2767d514e39ef497492c34a609da76cd71c7d5ca9aa5634f80c6ea-rootfs.mount: Deactivated successfully. 
Dec 16 12:58:31.887631 kubelet[2806]: I1216 12:58:31.887460 2806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:58:31.888219 kubelet[2806]: E1216 12:58:31.887901 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:31.888219 kubelet[2806]: E1216 12:58:31.888207 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:32.795997 kubelet[2806]: E1216 12:58:32.795899 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:32.891354 kubelet[2806]: E1216 12:58:32.891315 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:32.892040 containerd[1605]: time="2025-12-16T12:58:32.891962864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:58:34.795980 kubelet[2806]: E1216 12:58:34.795923 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:35.295302 containerd[1605]: time="2025-12-16T12:58:35.295239049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:35.296054 
containerd[1605]: time="2025-12-16T12:58:35.296014175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 12:58:35.297085 containerd[1605]: time="2025-12-16T12:58:35.297046554Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:35.299109 containerd[1605]: time="2025-12-16T12:58:35.299060647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:35.299626 containerd[1605]: time="2025-12-16T12:58:35.299589711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.407589897s" Dec 16 12:58:35.299626 containerd[1605]: time="2025-12-16T12:58:35.299616882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 12:58:35.303478 containerd[1605]: time="2025-12-16T12:58:35.303387756Z" level=info msg="CreateContainer within sandbox \"27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:58:35.312142 containerd[1605]: time="2025-12-16T12:58:35.312096674Z" level=info msg="Container 2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:35.319793 containerd[1605]: time="2025-12-16T12:58:35.319760429Z" level=info msg="CreateContainer within sandbox 
\"27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540\"" Dec 16 12:58:35.320224 containerd[1605]: time="2025-12-16T12:58:35.320188654Z" level=info msg="StartContainer for \"2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540\"" Dec 16 12:58:35.321518 containerd[1605]: time="2025-12-16T12:58:35.321491200Z" level=info msg="connecting to shim 2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540" address="unix:///run/containerd/s/157f9e5b4af118944988a8cd012c7837aa74cf01d6ad26679c715693005e6b9e" protocol=ttrpc version=3 Dec 16 12:58:35.343006 systemd[1]: Started cri-containerd-2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540.scope - libcontainer container 2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540. Dec 16 12:58:35.409000 audit: BPF prog-id=162 op=LOAD Dec 16 12:58:35.409000 audit[3607]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3393 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261336366313839356436633431613530653932313162633839376663 Dec 16 12:58:35.409000 audit: BPF prog-id=163 op=LOAD Dec 16 12:58:35.409000 audit[3607]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3393 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:35.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261336366313839356436633431613530653932313162633839376663 Dec 16 12:58:35.409000 audit: BPF prog-id=163 op=UNLOAD Dec 16 12:58:35.409000 audit[3607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261336366313839356436633431613530653932313162633839376663 Dec 16 12:58:35.409000 audit: BPF prog-id=162 op=UNLOAD Dec 16 12:58:35.409000 audit[3607]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261336366313839356436633431613530653932313162633839376663 Dec 16 12:58:35.409000 audit: BPF prog-id=164 op=LOAD Dec 16 12:58:35.409000 audit[3607]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3393 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:35.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261336366313839356436633431613530653932313162633839376663 Dec 16 12:58:35.430992 containerd[1605]: time="2025-12-16T12:58:35.430950025Z" level=info msg="StartContainer for \"2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540\" returns successfully" Dec 16 12:58:35.900097 kubelet[2806]: E1216 12:58:35.900047 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:36.319977 systemd[1]: cri-containerd-2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540.scope: Deactivated successfully. Dec 16 12:58:36.320365 systemd[1]: cri-containerd-2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540.scope: Consumed 604ms CPU time, 168M memory peak, 3.9M read from disk, 171.3M written to disk. Dec 16 12:58:36.323381 containerd[1605]: time="2025-12-16T12:58:36.323345772Z" level=info msg="received container exit event container_id:\"2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540\" id:\"2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540\" pid:3620 exited_at:{seconds:1765889916 nanos:320713438}" Dec 16 12:58:36.324000 audit: BPF prog-id=164 op=UNLOAD Dec 16 12:58:36.350440 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a3cf1895d6c41a50e9211bc897fc8efffe1a369d123776a101441c6f0dc1540-rootfs.mount: Deactivated successfully. 
Dec 16 12:58:36.352586 kubelet[2806]: I1216 12:58:36.352395 2806 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:58:36.670982 systemd[1]: Created slice kubepods-besteffort-pod1c71642e_2c37_4a5d_aec0_8a0d6c89217c.slice - libcontainer container kubepods-besteffort-pod1c71642e_2c37_4a5d_aec0_8a0d6c89217c.slice. Dec 16 12:58:36.679371 systemd[1]: Created slice kubepods-burstable-podf185fc0f_162e_4772_849a_712bab097239.slice - libcontainer container kubepods-burstable-podf185fc0f_162e_4772_849a_712bab097239.slice. Dec 16 12:58:36.688710 systemd[1]: Created slice kubepods-besteffort-podae64ff47_24ee_417f_a174_8a680294cf45.slice - libcontainer container kubepods-besteffort-podae64ff47_24ee_417f_a174_8a680294cf45.slice. Dec 16 12:58:36.696451 systemd[1]: Created slice kubepods-besteffort-podd3d0f018_dbb1_4af4_8317_c55456bbf69e.slice - libcontainer container kubepods-besteffort-podd3d0f018_dbb1_4af4_8317_c55456bbf69e.slice. Dec 16 12:58:36.697399 kubelet[2806]: I1216 12:58:36.696460 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e4c04278-05a3-4964-9b08-f5b05bcddf6d-calico-apiserver-certs\") pod \"calico-apiserver-7767f7c484-w77xw\" (UID: \"e4c04278-05a3-4964-9b08-f5b05bcddf6d\") " pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" Dec 16 12:58:36.697498 kubelet[2806]: I1216 12:58:36.697409 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9wsx\" (UniqueName: \"kubernetes.io/projected/e4c04278-05a3-4964-9b08-f5b05bcddf6d-kube-api-access-c9wsx\") pod \"calico-apiserver-7767f7c484-w77xw\" (UID: \"e4c04278-05a3-4964-9b08-f5b05bcddf6d\") " pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" Dec 16 12:58:36.703096 systemd[1]: Created slice kubepods-besteffort-pode4c04278_05a3_4964_9b08_f5b05bcddf6d.slice - libcontainer container 
kubepods-besteffort-pode4c04278_05a3_4964_9b08_f5b05bcddf6d.slice. Dec 16 12:58:36.709995 systemd[1]: Created slice kubepods-besteffort-pode710a919_c171_452f_a8e0_220cab9661a8.slice - libcontainer container kubepods-besteffort-pode710a919_c171_452f_a8e0_220cab9661a8.slice. Dec 16 12:58:36.716635 systemd[1]: Created slice kubepods-burstable-podab274844_97cc_406d_90f8_4c834c435e0c.slice - libcontainer container kubepods-burstable-podab274844_97cc_406d_90f8_4c834c435e0c.slice. Dec 16 12:58:36.724043 systemd[1]: Created slice kubepods-besteffort-pod08308b23_48ef_4e90_90c4_1f977d90ef17.slice - libcontainer container kubepods-besteffort-pod08308b23_48ef_4e90_90c4_1f977d90ef17.slice. Dec 16 12:58:36.797732 kubelet[2806]: I1216 12:58:36.797695 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-ca-bundle\") pod \"whisker-6cd9cbf7b-47jss\" (UID: \"08308b23-48ef-4e90-90c4-1f977d90ef17\") " pod="calico-system/whisker-6cd9cbf7b-47jss" Dec 16 12:58:36.797879 kubelet[2806]: I1216 12:58:36.797738 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e710a919-c171-452f-a8e0-220cab9661a8-config\") pod \"goldmane-666569f655-r5stm\" (UID: \"e710a919-c171-452f-a8e0-220cab9661a8\") " pod="calico-system/goldmane-666569f655-r5stm" Dec 16 12:58:36.797879 kubelet[2806]: I1216 12:58:36.797758 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mcc\" (UniqueName: \"kubernetes.io/projected/1c71642e-2c37-4a5d-aec0-8a0d6c89217c-kube-api-access-j9mcc\") pod \"calico-kube-controllers-77fbcbddcb-x6lxz\" (UID: \"1c71642e-2c37-4a5d-aec0-8a0d6c89217c\") " pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" Dec 16 12:58:36.797879 kubelet[2806]: I1216 12:58:36.797776 2806 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e710a919-c171-452f-a8e0-220cab9661a8-goldmane-ca-bundle\") pod \"goldmane-666569f655-r5stm\" (UID: \"e710a919-c171-452f-a8e0-220cab9661a8\") " pod="calico-system/goldmane-666569f655-r5stm" Dec 16 12:58:36.797879 kubelet[2806]: I1216 12:58:36.797793 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3d0f018-dbb1-4af4-8317-c55456bbf69e-calico-apiserver-certs\") pod \"calico-apiserver-7767f7c484-gsgtz\" (UID: \"d3d0f018-dbb1-4af4-8317-c55456bbf69e\") " pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" Dec 16 12:58:36.797879 kubelet[2806]: I1216 12:58:36.797806 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggkw\" (UniqueName: \"kubernetes.io/projected/e710a919-c171-452f-a8e0-220cab9661a8-kube-api-access-vggkw\") pod \"goldmane-666569f655-r5stm\" (UID: \"e710a919-c171-452f-a8e0-220cab9661a8\") " pod="calico-system/goldmane-666569f655-r5stm" Dec 16 12:58:36.798113 kubelet[2806]: I1216 12:58:36.797839 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab274844-97cc-406d-90f8-4c834c435e0c-config-volume\") pod \"coredns-674b8bbfcf-fhwjm\" (UID: \"ab274844-97cc-406d-90f8-4c834c435e0c\") " pod="kube-system/coredns-674b8bbfcf-fhwjm" Dec 16 12:58:36.798113 kubelet[2806]: I1216 12:58:36.797861 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f185fc0f-162e-4772-849a-712bab097239-config-volume\") pod \"coredns-674b8bbfcf-wwmpg\" (UID: \"f185fc0f-162e-4772-849a-712bab097239\") " pod="kube-system/coredns-674b8bbfcf-wwmpg" Dec 16 
12:58:36.798113 kubelet[2806]: I1216 12:58:36.797880 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89zx\" (UniqueName: \"kubernetes.io/projected/f185fc0f-162e-4772-849a-712bab097239-kube-api-access-v89zx\") pod \"coredns-674b8bbfcf-wwmpg\" (UID: \"f185fc0f-162e-4772-849a-712bab097239\") " pod="kube-system/coredns-674b8bbfcf-wwmpg" Dec 16 12:58:36.798113 kubelet[2806]: I1216 12:58:36.797894 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e710a919-c171-452f-a8e0-220cab9661a8-goldmane-key-pair\") pod \"goldmane-666569f655-r5stm\" (UID: \"e710a919-c171-452f-a8e0-220cab9661a8\") " pod="calico-system/goldmane-666569f655-r5stm" Dec 16 12:58:36.798113 kubelet[2806]: I1216 12:58:36.797908 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv5ck\" (UniqueName: \"kubernetes.io/projected/ae64ff47-24ee-417f-a174-8a680294cf45-kube-api-access-rv5ck\") pod \"calico-apiserver-5c7fb6bf4b-pj7kl\" (UID: \"ae64ff47-24ee-417f-a174-8a680294cf45\") " pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" Dec 16 12:58:36.798284 kubelet[2806]: I1216 12:58:36.797956 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c71642e-2c37-4a5d-aec0-8a0d6c89217c-tigera-ca-bundle\") pod \"calico-kube-controllers-77fbcbddcb-x6lxz\" (UID: \"1c71642e-2c37-4a5d-aec0-8a0d6c89217c\") " pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" Dec 16 12:58:36.798284 kubelet[2806]: I1216 12:58:36.797989 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wv5p\" (UniqueName: \"kubernetes.io/projected/d3d0f018-dbb1-4af4-8317-c55456bbf69e-kube-api-access-2wv5p\") pod 
\"calico-apiserver-7767f7c484-gsgtz\" (UID: \"d3d0f018-dbb1-4af4-8317-c55456bbf69e\") " pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" Dec 16 12:58:36.798284 kubelet[2806]: I1216 12:58:36.798011 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-backend-key-pair\") pod \"whisker-6cd9cbf7b-47jss\" (UID: \"08308b23-48ef-4e90-90c4-1f977d90ef17\") " pod="calico-system/whisker-6cd9cbf7b-47jss" Dec 16 12:58:36.798284 kubelet[2806]: I1216 12:58:36.798035 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52scr\" (UniqueName: \"kubernetes.io/projected/ab274844-97cc-406d-90f8-4c834c435e0c-kube-api-access-52scr\") pod \"coredns-674b8bbfcf-fhwjm\" (UID: \"ab274844-97cc-406d-90f8-4c834c435e0c\") " pod="kube-system/coredns-674b8bbfcf-fhwjm" Dec 16 12:58:36.798284 kubelet[2806]: I1216 12:58:36.798071 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zvw\" (UniqueName: \"kubernetes.io/projected/08308b23-48ef-4e90-90c4-1f977d90ef17-kube-api-access-48zvw\") pod \"whisker-6cd9cbf7b-47jss\" (UID: \"08308b23-48ef-4e90-90c4-1f977d90ef17\") " pod="calico-system/whisker-6cd9cbf7b-47jss" Dec 16 12:58:36.798458 kubelet[2806]: I1216 12:58:36.798096 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae64ff47-24ee-417f-a174-8a680294cf45-calico-apiserver-certs\") pod \"calico-apiserver-5c7fb6bf4b-pj7kl\" (UID: \"ae64ff47-24ee-417f-a174-8a680294cf45\") " pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" Dec 16 12:58:36.812259 systemd[1]: Created slice kubepods-besteffort-pod4e3b9dec_c7ec_4533_9b5f_135d8bcc981d.slice - libcontainer container 
kubepods-besteffort-pod4e3b9dec_c7ec_4533_9b5f_135d8bcc981d.slice. Dec 16 12:58:36.814686 containerd[1605]: time="2025-12-16T12:58:36.814653446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lx5v,Uid:4e3b9dec-c7ec-4533-9b5f-135d8bcc981d,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:36.923378 kubelet[2806]: E1216 12:58:36.922917 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:36.925600 containerd[1605]: time="2025-12-16T12:58:36.925559438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:58:36.941910 containerd[1605]: time="2025-12-16T12:58:36.941855345Z" level=error msg="Failed to destroy network for sandbox \"8371cfc367c7148a9d8606f3b84b6d6a4fbd5928d054219dc3512b159b9b31f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:36.944035 containerd[1605]: time="2025-12-16T12:58:36.943991166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lx5v,Uid:4e3b9dec-c7ec-4533-9b5f-135d8bcc981d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8371cfc367c7148a9d8606f3b84b6d6a4fbd5928d054219dc3512b159b9b31f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:36.944237 kubelet[2806]: E1216 12:58:36.944204 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8371cfc367c7148a9d8606f3b84b6d6a4fbd5928d054219dc3512b159b9b31f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:36.944279 kubelet[2806]: E1216 12:58:36.944262 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8371cfc367c7148a9d8606f3b84b6d6a4fbd5928d054219dc3512b159b9b31f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4lx5v" Dec 16 12:58:36.944307 kubelet[2806]: E1216 12:58:36.944281 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8371cfc367c7148a9d8606f3b84b6d6a4fbd5928d054219dc3512b159b9b31f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4lx5v" Dec 16 12:58:36.944351 kubelet[2806]: E1216 12:58:36.944325 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8371cfc367c7148a9d8606f3b84b6d6a4fbd5928d054219dc3512b159b9b31f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:36.977136 containerd[1605]: time="2025-12-16T12:58:36.977096663Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-77fbcbddcb-x6lxz,Uid:1c71642e-2c37-4a5d-aec0-8a0d6c89217c,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:36.985446 kubelet[2806]: E1216 12:58:36.985408 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:36.985898 containerd[1605]: time="2025-12-16T12:58:36.985849734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwmpg,Uid:f185fc0f-162e-4772-849a-712bab097239,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:36.994699 containerd[1605]: time="2025-12-16T12:58:36.994500732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c7fb6bf4b-pj7kl,Uid:ae64ff47-24ee-417f-a174-8a680294cf45,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:37.003324 containerd[1605]: time="2025-12-16T12:58:37.002467394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7767f7c484-gsgtz,Uid:d3d0f018-dbb1-4af4-8317-c55456bbf69e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:37.006439 containerd[1605]: time="2025-12-16T12:58:37.006358562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7767f7c484-w77xw,Uid:e4c04278-05a3-4964-9b08-f5b05bcddf6d,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:37.015858 containerd[1605]: time="2025-12-16T12:58:37.015803600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5stm,Uid:e710a919-c171-452f-a8e0-220cab9661a8,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:37.021219 kubelet[2806]: E1216 12:58:37.021183 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:37.029839 containerd[1605]: time="2025-12-16T12:58:37.029673828Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6cd9cbf7b-47jss,Uid:08308b23-48ef-4e90-90c4-1f977d90ef17,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:37.030419 containerd[1605]: time="2025-12-16T12:58:37.030400914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhwjm,Uid:ab274844-97cc-406d-90f8-4c834c435e0c,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:37.102564 containerd[1605]: time="2025-12-16T12:58:37.102509333Z" level=error msg="Failed to destroy network for sandbox \"88f3873fbcec18e853a0ae672dc3b498b20667b9e5cf841e345d304e08513d3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.107097 containerd[1605]: time="2025-12-16T12:58:37.107003373Z" level=error msg="Failed to destroy network for sandbox \"0bd49113876a02e0b90e8e2f7c55450d2e4f3b71d9dbea244ee5c507542421a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.110697 containerd[1605]: time="2025-12-16T12:58:37.110342975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fbcbddcb-x6lxz,Uid:1c71642e-2c37-4a5d-aec0-8a0d6c89217c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"88f3873fbcec18e853a0ae672dc3b498b20667b9e5cf841e345d304e08513d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.113210 containerd[1605]: time="2025-12-16T12:58:37.113011226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwmpg,Uid:f185fc0f-162e-4772-849a-712bab097239,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd49113876a02e0b90e8e2f7c55450d2e4f3b71d9dbea244ee5c507542421a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.114777 kubelet[2806]: E1216 12:58:37.114697 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd49113876a02e0b90e8e2f7c55450d2e4f3b71d9dbea244ee5c507542421a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.114777 kubelet[2806]: E1216 12:58:37.114759 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd49113876a02e0b90e8e2f7c55450d2e4f3b71d9dbea244ee5c507542421a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wwmpg" Dec 16 12:58:37.114885 kubelet[2806]: E1216 12:58:37.114780 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bd49113876a02e0b90e8e2f7c55450d2e4f3b71d9dbea244ee5c507542421a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-wwmpg" Dec 16 12:58:37.114885 kubelet[2806]: E1216 12:58:37.114866 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"88f3873fbcec18e853a0ae672dc3b498b20667b9e5cf841e345d304e08513d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.114972 kubelet[2806]: E1216 12:58:37.114887 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88f3873fbcec18e853a0ae672dc3b498b20667b9e5cf841e345d304e08513d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" Dec 16 12:58:37.114972 kubelet[2806]: E1216 12:58:37.114900 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88f3873fbcec18e853a0ae672dc3b498b20667b9e5cf841e345d304e08513d3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" Dec 16 12:58:37.114972 kubelet[2806]: E1216 12:58:37.114922 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77fbcbddcb-x6lxz_calico-system(1c71642e-2c37-4a5d-aec0-8a0d6c89217c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77fbcbddcb-x6lxz_calico-system(1c71642e-2c37-4a5d-aec0-8a0d6c89217c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88f3873fbcec18e853a0ae672dc3b498b20667b9e5cf841e345d304e08513d3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" podUID="1c71642e-2c37-4a5d-aec0-8a0d6c89217c" Dec 16 12:58:37.115796 kubelet[2806]: E1216 12:58:37.114971 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wwmpg_kube-system(f185fc0f-162e-4772-849a-712bab097239)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wwmpg_kube-system(f185fc0f-162e-4772-849a-712bab097239)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bd49113876a02e0b90e8e2f7c55450d2e4f3b71d9dbea244ee5c507542421a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-wwmpg" podUID="f185fc0f-162e-4772-849a-712bab097239" Dec 16 12:58:37.121338 containerd[1605]: time="2025-12-16T12:58:37.121283201Z" level=error msg="Failed to destroy network for sandbox \"3b4399a85aece91020fa48452825eaab1ff8c8f9abdd20a73049c636f56860fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.133153 containerd[1605]: time="2025-12-16T12:58:37.133090665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c7fb6bf4b-pj7kl,Uid:ae64ff47-24ee-417f-a174-8a680294cf45,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b4399a85aece91020fa48452825eaab1ff8c8f9abdd20a73049c636f56860fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.134745 kubelet[2806]: E1216 12:58:37.133813 2806 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b4399a85aece91020fa48452825eaab1ff8c8f9abdd20a73049c636f56860fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.134745 kubelet[2806]: E1216 12:58:37.133897 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b4399a85aece91020fa48452825eaab1ff8c8f9abdd20a73049c636f56860fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" Dec 16 12:58:37.134745 kubelet[2806]: E1216 12:58:37.133917 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b4399a85aece91020fa48452825eaab1ff8c8f9abdd20a73049c636f56860fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" Dec 16 12:58:37.134902 kubelet[2806]: E1216 12:58:37.133959 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c7fb6bf4b-pj7kl_calico-apiserver(ae64ff47-24ee-417f-a174-8a680294cf45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c7fb6bf4b-pj7kl_calico-apiserver(ae64ff47-24ee-417f-a174-8a680294cf45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b4399a85aece91020fa48452825eaab1ff8c8f9abdd20a73049c636f56860fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" podUID="ae64ff47-24ee-417f-a174-8a680294cf45" Dec 16 12:58:37.165409 containerd[1605]: time="2025-12-16T12:58:37.165338968Z" level=error msg="Failed to destroy network for sandbox \"9243539307ada02688066c8f00be6d4ce324cb159afec77bb745078def621e68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.165923 containerd[1605]: time="2025-12-16T12:58:37.165873862Z" level=error msg="Failed to destroy network for sandbox \"18673d5fbf3d087e92267c0eb9cac7158725769e70d0bfe91c43cf00d51e9e21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.166991 containerd[1605]: time="2025-12-16T12:58:37.166962226Z" level=error msg="Failed to destroy network for sandbox \"3275409bcb5663d24d7b44bce3b2a0739ebef1a2a1e223e620f8eb7d18f270d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.167205 containerd[1605]: time="2025-12-16T12:58:37.166990569Z" level=error msg="Failed to destroy network for sandbox \"357bbbe1fe3bb98c81ead94e833a30f384329a44e47f2eb98c25ef0ffabf7fdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.169190 containerd[1605]: time="2025-12-16T12:58:37.169153692Z" level=error msg="Failed to destroy network for sandbox \"3e66b54be61c2ac546ff3de6f7a51f0313dcb45bd7665700f898721635561ce1\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.170379 containerd[1605]: time="2025-12-16T12:58:37.170341352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cd9cbf7b-47jss,Uid:08308b23-48ef-4e90-90c4-1f977d90ef17,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18673d5fbf3d087e92267c0eb9cac7158725769e70d0bfe91c43cf00d51e9e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.170618 kubelet[2806]: E1216 12:58:37.170577 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18673d5fbf3d087e92267c0eb9cac7158725769e70d0bfe91c43cf00d51e9e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.170675 kubelet[2806]: E1216 12:58:37.170641 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18673d5fbf3d087e92267c0eb9cac7158725769e70d0bfe91c43cf00d51e9e21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cd9cbf7b-47jss" Dec 16 12:58:37.170675 kubelet[2806]: E1216 12:58:37.170663 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18673d5fbf3d087e92267c0eb9cac7158725769e70d0bfe91c43cf00d51e9e21\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cd9cbf7b-47jss" Dec 16 12:58:37.170732 kubelet[2806]: E1216 12:58:37.170711 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cd9cbf7b-47jss_calico-system(08308b23-48ef-4e90-90c4-1f977d90ef17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cd9cbf7b-47jss_calico-system(08308b23-48ef-4e90-90c4-1f977d90ef17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18673d5fbf3d087e92267c0eb9cac7158725769e70d0bfe91c43cf00d51e9e21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cd9cbf7b-47jss" podUID="08308b23-48ef-4e90-90c4-1f977d90ef17" Dec 16 12:58:37.173159 containerd[1605]: time="2025-12-16T12:58:37.173099663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7767f7c484-gsgtz,Uid:d3d0f018-dbb1-4af4-8317-c55456bbf69e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9243539307ada02688066c8f00be6d4ce324cb159afec77bb745078def621e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.173692 kubelet[2806]: E1216 12:58:37.173572 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9243539307ada02688066c8f00be6d4ce324cb159afec77bb745078def621e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 16 12:58:37.173692 kubelet[2806]: E1216 12:58:37.173605 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9243539307ada02688066c8f00be6d4ce324cb159afec77bb745078def621e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" Dec 16 12:58:37.173692 kubelet[2806]: E1216 12:58:37.173622 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9243539307ada02688066c8f00be6d4ce324cb159afec77bb745078def621e68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" Dec 16 12:58:37.173892 kubelet[2806]: E1216 12:58:37.173652 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7767f7c484-gsgtz_calico-apiserver(d3d0f018-dbb1-4af4-8317-c55456bbf69e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7767f7c484-gsgtz_calico-apiserver(d3d0f018-dbb1-4af4-8317-c55456bbf69e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9243539307ada02688066c8f00be6d4ce324cb159afec77bb745078def621e68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e" Dec 16 12:58:37.175499 containerd[1605]: time="2025-12-16T12:58:37.175457851Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-r5stm,Uid:e710a919-c171-452f-a8e0-220cab9661a8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3275409bcb5663d24d7b44bce3b2a0739ebef1a2a1e223e620f8eb7d18f270d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.175652 kubelet[2806]: E1216 12:58:37.175612 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3275409bcb5663d24d7b44bce3b2a0739ebef1a2a1e223e620f8eb7d18f270d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.175705 kubelet[2806]: E1216 12:58:37.175652 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3275409bcb5663d24d7b44bce3b2a0739ebef1a2a1e223e620f8eb7d18f270d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5stm" Dec 16 12:58:37.175705 kubelet[2806]: E1216 12:58:37.175668 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3275409bcb5663d24d7b44bce3b2a0739ebef1a2a1e223e620f8eb7d18f270d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-r5stm" Dec 16 12:58:37.175765 kubelet[2806]: E1216 12:58:37.175700 2806 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-r5stm_calico-system(e710a919-c171-452f-a8e0-220cab9661a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-r5stm_calico-system(e710a919-c171-452f-a8e0-220cab9661a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3275409bcb5663d24d7b44bce3b2a0739ebef1a2a1e223e620f8eb7d18f270d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-r5stm" podUID="e710a919-c171-452f-a8e0-220cab9661a8" Dec 16 12:58:37.176443 containerd[1605]: time="2025-12-16T12:58:37.176371757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7767f7c484-w77xw,Uid:e4c04278-05a3-4964-9b08-f5b05bcddf6d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"357bbbe1fe3bb98c81ead94e833a30f384329a44e47f2eb98c25ef0ffabf7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.176582 kubelet[2806]: E1216 12:58:37.176558 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"357bbbe1fe3bb98c81ead94e833a30f384329a44e47f2eb98c25ef0ffabf7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.176627 kubelet[2806]: E1216 12:58:37.176608 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"357bbbe1fe3bb98c81ead94e833a30f384329a44e47f2eb98c25ef0ffabf7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" Dec 16 12:58:37.176653 kubelet[2806]: E1216 12:58:37.176631 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"357bbbe1fe3bb98c81ead94e833a30f384329a44e47f2eb98c25ef0ffabf7fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" Dec 16 12:58:37.176703 kubelet[2806]: E1216 12:58:37.176675 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7767f7c484-w77xw_calico-apiserver(e4c04278-05a3-4964-9b08-f5b05bcddf6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7767f7c484-w77xw_calico-apiserver(e4c04278-05a3-4964-9b08-f5b05bcddf6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"357bbbe1fe3bb98c81ead94e833a30f384329a44e47f2eb98c25ef0ffabf7fdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" podUID="e4c04278-05a3-4964-9b08-f5b05bcddf6d" Dec 16 12:58:37.178285 containerd[1605]: time="2025-12-16T12:58:37.178247771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhwjm,Uid:ab274844-97cc-406d-90f8-4c834c435e0c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3e66b54be61c2ac546ff3de6f7a51f0313dcb45bd7665700f898721635561ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.178427 kubelet[2806]: E1216 12:58:37.178386 2806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e66b54be61c2ac546ff3de6f7a51f0313dcb45bd7665700f898721635561ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:58:37.178427 kubelet[2806]: E1216 12:58:37.178417 2806 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e66b54be61c2ac546ff3de6f7a51f0313dcb45bd7665700f898721635561ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fhwjm" Dec 16 12:58:37.178505 kubelet[2806]: E1216 12:58:37.178432 2806 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e66b54be61c2ac546ff3de6f7a51f0313dcb45bd7665700f898721635561ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fhwjm" Dec 16 12:58:37.178505 kubelet[2806]: E1216 12:58:37.178466 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fhwjm_kube-system(ab274844-97cc-406d-90f8-4c834c435e0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-fhwjm_kube-system(ab274844-97cc-406d-90f8-4c834c435e0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e66b54be61c2ac546ff3de6f7a51f0313dcb45bd7665700f898721635561ce1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fhwjm" podUID="ab274844-97cc-406d-90f8-4c834c435e0c" Dec 16 12:58:37.360452 systemd[1]: run-netns-cni\x2d90e52fa0\x2dc405\x2d730b\x2de13e\x2d2f662a36da18.mount: Deactivated successfully. Dec 16 12:58:40.804176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242690568.mount: Deactivated successfully. Dec 16 12:58:41.552816 containerd[1605]: time="2025-12-16T12:58:41.552726476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:41.554769 containerd[1605]: time="2025-12-16T12:58:41.554718215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 12:58:41.555998 containerd[1605]: time="2025-12-16T12:58:41.555956389Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:41.568025 containerd[1605]: time="2025-12-16T12:58:41.567980334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:58:41.568646 containerd[1605]: time="2025-12-16T12:58:41.568608793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo 
digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.642866082s" Dec 16 12:58:41.568646 containerd[1605]: time="2025-12-16T12:58:41.568639811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 12:58:41.586972 containerd[1605]: time="2025-12-16T12:58:41.586901877Z" level=info msg="CreateContainer within sandbox \"27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:58:41.601312 containerd[1605]: time="2025-12-16T12:58:41.601260223Z" level=info msg="Container dcb37c16868db76f5b823a6ddbacfe7ae12e9ff98840839a404ab877e0b7765d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:41.613622 containerd[1605]: time="2025-12-16T12:58:41.613548072Z" level=info msg="CreateContainer within sandbox \"27a0ccf4ee113f1159053578dab244fdd9db75a44c56bc30812920e6e5121695\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dcb37c16868db76f5b823a6ddbacfe7ae12e9ff98840839a404ab877e0b7765d\"" Dec 16 12:58:41.616429 containerd[1605]: time="2025-12-16T12:58:41.614189367Z" level=info msg="StartContainer for \"dcb37c16868db76f5b823a6ddbacfe7ae12e9ff98840839a404ab877e0b7765d\"" Dec 16 12:58:41.616429 containerd[1605]: time="2025-12-16T12:58:41.615608080Z" level=info msg="connecting to shim dcb37c16868db76f5b823a6ddbacfe7ae12e9ff98840839a404ab877e0b7765d" address="unix:///run/containerd/s/157f9e5b4af118944988a8cd012c7837aa74cf01d6ad26679c715693005e6b9e" protocol=ttrpc version=3 Dec 16 12:58:41.650003 systemd[1]: Started cri-containerd-dcb37c16868db76f5b823a6ddbacfe7ae12e9ff98840839a404ab877e0b7765d.scope - libcontainer container dcb37c16868db76f5b823a6ddbacfe7ae12e9ff98840839a404ab877e0b7765d. 
Dec 16 12:58:41.715000 audit: BPF prog-id=165 op=LOAD Dec 16 12:58:41.717219 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 16 12:58:41.717278 kernel: audit: type=1334 audit(1765889921.715:549): prog-id=165 op=LOAD Dec 16 12:58:41.715000 audit[3969]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.723602 kernel: audit: type=1300 audit(1765889921.715:549): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.723644 kernel: audit: type=1327 audit(1765889921.715:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.715000 audit: BPF prog-id=166 op=LOAD Dec 16 12:58:41.730346 kernel: audit: type=1334 audit(1765889921.715:550): prog-id=166 op=LOAD Dec 16 12:58:41.715000 audit[3969]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.735774 kernel: audit: type=1300 audit(1765889921.715:550): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.740851 kernel: audit: type=1327 audit(1765889921.715:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.740910 kernel: audit: type=1334 audit(1765889921.715:551): prog-id=166 op=UNLOAD Dec 16 12:58:41.715000 audit: BPF prog-id=166 op=UNLOAD Dec 16 12:58:41.715000 audit[3969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.747150 kernel: audit: type=1300 audit(1765889921.715:551): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.715000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.752449 kernel: audit: type=1327 audit(1765889921.715:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.753941 kernel: audit: type=1334 audit(1765889921.715:552): prog-id=165 op=UNLOAD Dec 16 12:58:41.715000 audit: BPF prog-id=165 op=UNLOAD Dec 16 12:58:41.715000 audit[3969]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.715000 audit: BPF prog-id=167 op=LOAD Dec 16 12:58:41.715000 audit[3969]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=3393 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:41.715000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463623337633136383638646237366635623832336136646462616366 Dec 16 12:58:41.754686 containerd[1605]: time="2025-12-16T12:58:41.754646050Z" level=info msg="StartContainer for \"dcb37c16868db76f5b823a6ddbacfe7ae12e9ff98840839a404ab877e0b7765d\" returns successfully" Dec 16 12:58:41.832337 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:58:41.832442 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:58:41.943348 kubelet[2806]: E1216 12:58:41.943114 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:41.955472 kubelet[2806]: I1216 12:58:41.955409 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vrqxg" podStartSLOduration=1.384278314 podStartE2EDuration="15.955393614s" podCreationTimestamp="2025-12-16 12:58:26 +0000 UTC" firstStartedPulling="2025-12-16 12:58:26.998147562 +0000 UTC m=+23.291286046" lastFinishedPulling="2025-12-16 12:58:41.569262862 +0000 UTC m=+37.862401346" observedRunningTime="2025-12-16 12:58:41.95509842 +0000 UTC m=+38.248236904" watchObservedRunningTime="2025-12-16 12:58:41.955393614 +0000 UTC m=+38.248532098" Dec 16 12:58:42.030251 kubelet[2806]: I1216 12:58:42.030204 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-ca-bundle\") pod \"08308b23-48ef-4e90-90c4-1f977d90ef17\" (UID: \"08308b23-48ef-4e90-90c4-1f977d90ef17\") " Dec 16 12:58:42.030251 kubelet[2806]: I1216 12:58:42.030241 2806 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-backend-key-pair\") pod \"08308b23-48ef-4e90-90c4-1f977d90ef17\" (UID: \"08308b23-48ef-4e90-90c4-1f977d90ef17\") " Dec 16 12:58:42.030251 kubelet[2806]: I1216 12:58:42.030261 2806 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48zvw\" (UniqueName: \"kubernetes.io/projected/08308b23-48ef-4e90-90c4-1f977d90ef17-kube-api-access-48zvw\") pod \"08308b23-48ef-4e90-90c4-1f977d90ef17\" (UID: \"08308b23-48ef-4e90-90c4-1f977d90ef17\") " Dec 16 12:58:42.031040 kubelet[2806]: I1216 12:58:42.030991 2806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "08308b23-48ef-4e90-90c4-1f977d90ef17" (UID: "08308b23-48ef-4e90-90c4-1f977d90ef17"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:58:42.035868 kubelet[2806]: I1216 12:58:42.034853 2806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "08308b23-48ef-4e90-90c4-1f977d90ef17" (UID: "08308b23-48ef-4e90-90c4-1f977d90ef17"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:58:42.035948 systemd[1]: var-lib-kubelet-pods-08308b23\x2d48ef\x2d4e90\x2d90c4\x2d1f977d90ef17-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d48zvw.mount: Deactivated successfully. 
Dec 16 12:58:42.036184 kubelet[2806]: I1216 12:58:42.036017 2806 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08308b23-48ef-4e90-90c4-1f977d90ef17-kube-api-access-48zvw" (OuterVolumeSpecName: "kube-api-access-48zvw") pod "08308b23-48ef-4e90-90c4-1f977d90ef17" (UID: "08308b23-48ef-4e90-90c4-1f977d90ef17"). InnerVolumeSpecName "kube-api-access-48zvw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:58:42.036341 systemd[1]: var-lib-kubelet-pods-08308b23\x2d48ef\x2d4e90\x2d90c4\x2d1f977d90ef17-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:58:42.131520 kubelet[2806]: I1216 12:58:42.131459 2806 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 12:58:42.131520 kubelet[2806]: I1216 12:58:42.131495 2806 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-48zvw\" (UniqueName: \"kubernetes.io/projected/08308b23-48ef-4e90-90c4-1f977d90ef17-kube-api-access-48zvw\") on node \"localhost\" DevicePath \"\"" Dec 16 12:58:42.131520 kubelet[2806]: I1216 12:58:42.131504 2806 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08308b23-48ef-4e90-90c4-1f977d90ef17-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 12:58:42.249507 systemd[1]: Removed slice kubepods-besteffort-pod08308b23_48ef_4e90_90c4_1f977d90ef17.slice - libcontainer container kubepods-besteffort-pod08308b23_48ef_4e90_90c4_1f977d90ef17.slice. Dec 16 12:58:42.386818 systemd[1]: Created slice kubepods-besteffort-pod0412d3e0_d5c8_47ca_9e38_144ba2ec1a92.slice - libcontainer container kubepods-besteffort-pod0412d3e0_d5c8_47ca_9e38_144ba2ec1a92.slice. 
Dec 16 12:58:42.434454 kubelet[2806]: I1216 12:58:42.433297 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0412d3e0-d5c8-47ca-9e38-144ba2ec1a92-whisker-ca-bundle\") pod \"whisker-6c9b9b98c4-zxd2x\" (UID: \"0412d3e0-d5c8-47ca-9e38-144ba2ec1a92\") " pod="calico-system/whisker-6c9b9b98c4-zxd2x" Dec 16 12:58:42.434454 kubelet[2806]: I1216 12:58:42.434366 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgm8\" (UniqueName: \"kubernetes.io/projected/0412d3e0-d5c8-47ca-9e38-144ba2ec1a92-kube-api-access-8kgm8\") pod \"whisker-6c9b9b98c4-zxd2x\" (UID: \"0412d3e0-d5c8-47ca-9e38-144ba2ec1a92\") " pod="calico-system/whisker-6c9b9b98c4-zxd2x" Dec 16 12:58:42.434454 kubelet[2806]: I1216 12:58:42.434396 2806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0412d3e0-d5c8-47ca-9e38-144ba2ec1a92-whisker-backend-key-pair\") pod \"whisker-6c9b9b98c4-zxd2x\" (UID: \"0412d3e0-d5c8-47ca-9e38-144ba2ec1a92\") " pod="calico-system/whisker-6c9b9b98c4-zxd2x" Dec 16 12:58:42.690485 containerd[1605]: time="2025-12-16T12:58:42.690289611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9b9b98c4-zxd2x,Uid:0412d3e0-d5c8-47ca-9e38-144ba2ec1a92,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:42.844108 systemd-networkd[1515]: cali53d9b61001b: Link UP Dec 16 12:58:42.844890 systemd-networkd[1515]: cali53d9b61001b: Gained carrier Dec 16 12:58:42.857401 containerd[1605]: 2025-12-16 12:58:42.714 [INFO][4040] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:58:42.857401 containerd[1605]: 2025-12-16 12:58:42.734 [INFO][4040] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0 whisker-6c9b9b98c4- calico-system 0412d3e0-d5c8-47ca-9e38-144ba2ec1a92 940 0 2025-12-16 12:58:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c9b9b98c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c9b9b98c4-zxd2x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali53d9b61001b [] [] }} ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-" Dec 16 12:58:42.857401 containerd[1605]: 2025-12-16 12:58:42.734 [INFO][4040] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" Dec 16 12:58:42.857401 containerd[1605]: 2025-12-16 12:58:42.798 [INFO][4054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" HandleID="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Workload="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.799 [INFO][4054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" HandleID="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Workload="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001223b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c9b9b98c4-zxd2x", "timestamp":"2025-12-16 12:58:42.798577257 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.799 [INFO][4054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.799 [INFO][4054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.799 [INFO][4054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.808 [INFO][4054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" host="localhost" Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.814 [INFO][4054] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.818 [INFO][4054] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.820 [INFO][4054] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.821 [INFO][4054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:42.857673 containerd[1605]: 2025-12-16 12:58:42.821 [INFO][4054] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" host="localhost" Dec 16 12:58:42.858020 containerd[1605]: 2025-12-16 12:58:42.823 [INFO][4054] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0 Dec 16 12:58:42.858020 containerd[1605]: 2025-12-16 12:58:42.826 [INFO][4054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" host="localhost" Dec 16 12:58:42.858020 containerd[1605]: 2025-12-16 12:58:42.832 [INFO][4054] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" host="localhost" Dec 16 12:58:42.858020 containerd[1605]: 2025-12-16 12:58:42.832 [INFO][4054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" host="localhost" Dec 16 12:58:42.858020 containerd[1605]: 2025-12-16 12:58:42.832 [INFO][4054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:42.858020 containerd[1605]: 2025-12-16 12:58:42.832 [INFO][4054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" HandleID="k8s-pod-network.fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Workload="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" Dec 16 12:58:42.858194 containerd[1605]: 2025-12-16 12:58:42.835 [INFO][4040] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0", GenerateName:"whisker-6c9b9b98c4-", Namespace:"calico-system", SelfLink:"", UID:"0412d3e0-d5c8-47ca-9e38-144ba2ec1a92", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c9b9b98c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c9b9b98c4-zxd2x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali53d9b61001b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:42.858194 containerd[1605]: 2025-12-16 12:58:42.836 [INFO][4040] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" Dec 16 12:58:42.858307 containerd[1605]: 2025-12-16 12:58:42.836 [INFO][4040] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53d9b61001b ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" Dec 16 12:58:42.858307 containerd[1605]: 2025-12-16 12:58:42.844 [INFO][4040] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" Dec 16 12:58:42.858369 containerd[1605]: 2025-12-16 12:58:42.844 [INFO][4040] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0", GenerateName:"whisker-6c9b9b98c4-", Namespace:"calico-system", SelfLink:"", UID:"0412d3e0-d5c8-47ca-9e38-144ba2ec1a92", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 42, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c9b9b98c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0", Pod:"whisker-6c9b9b98c4-zxd2x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali53d9b61001b", MAC:"1a:f6:81:05:5e:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:42.858442 containerd[1605]: 2025-12-16 12:58:42.854 [INFO][4040] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" Namespace="calico-system" Pod="whisker-6c9b9b98c4-zxd2x" WorkloadEndpoint="localhost-k8s-whisker--6c9b9b98c4--zxd2x-eth0" Dec 16 12:58:42.909415 containerd[1605]: time="2025-12-16T12:58:42.909357061Z" level=info msg="connecting to shim fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0" address="unix:///run/containerd/s/66fffe9b02625691480805033dfdb061c67bdb8e13c132f578b67da34d44a573" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:42.943041 systemd[1]: Started cri-containerd-fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0.scope - libcontainer container fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0. 
Dec 16 12:58:42.944261 kubelet[2806]: I1216 12:58:42.944233 2806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:58:42.944903 kubelet[2806]: E1216 12:58:42.944868 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:42.955000 audit: BPF prog-id=168 op=LOAD Dec 16 12:58:42.955000 audit: BPF prog-id=169 op=LOAD Dec 16 12:58:42.955000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4074 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:42.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663663531366463653432386539303963326263356165363731636339 Dec 16 12:58:42.955000 audit: BPF prog-id=169 op=UNLOAD Dec 16 12:58:42.955000 audit[4086]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:42.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663663531366463653432386539303963326263356165363731636339 Dec 16 12:58:42.956000 audit: BPF prog-id=170 op=LOAD Dec 16 12:58:42.956000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4074 pid=4086 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:42.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663663531366463653432386539303963326263356165363731636339 Dec 16 12:58:42.956000 audit: BPF prog-id=171 op=LOAD Dec 16 12:58:42.956000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4074 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:42.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663663531366463653432386539303963326263356165363731636339 Dec 16 12:58:42.956000 audit: BPF prog-id=171 op=UNLOAD Dec 16 12:58:42.956000 audit[4086]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4074 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:42.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663663531366463653432386539303963326263356165363731636339 Dec 16 12:58:42.956000 audit: BPF prog-id=170 op=UNLOAD Dec 16 12:58:42.956000 audit[4086]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 
a3=0 items=0 ppid=4074 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:42.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663663531366463653432386539303963326263356165363731636339 Dec 16 12:58:42.956000 audit: BPF prog-id=172 op=LOAD Dec 16 12:58:42.956000 audit[4086]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4074 pid=4086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:42.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6663663531366463653432386539303963326263356165363731636339 Dec 16 12:58:42.958061 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:42.993238 containerd[1605]: time="2025-12-16T12:58:42.993182574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9b9b98c4-zxd2x,Uid:0412d3e0-d5c8-47ca-9e38-144ba2ec1a92,Namespace:calico-system,Attempt:0,} returns sandbox id \"fcf516dce428e909c2bc5ae671cc908cf3c8d29e458fef903dbb912ae3a8d2d0\"" Dec 16 12:58:42.994647 containerd[1605]: time="2025-12-16T12:58:42.994614423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:58:43.376368 containerd[1605]: time="2025-12-16T12:58:43.376288464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:43.433187 
containerd[1605]: time="2025-12-16T12:58:43.433111560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:58:43.433187 containerd[1605]: time="2025-12-16T12:58:43.433182794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:43.435285 kubelet[2806]: E1216 12:58:43.435242 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:58:43.435374 kubelet[2806]: E1216 12:58:43.435288 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:58:43.440543 kubelet[2806]: E1216 12:58:43.440487 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e22e1d47bcea42178d38e488f88370a3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kgm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9b9b98c4-zxd2x_calico-system(0412d3e0-d5c8-47ca-9e38-144ba2ec1a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:43.442547 containerd[1605]: time="2025-12-16T12:58:43.442343470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:58:43.795468 containerd[1605]: 
time="2025-12-16T12:58:43.795417334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:43.798407 kubelet[2806]: I1216 12:58:43.798372 2806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08308b23-48ef-4e90-90c4-1f977d90ef17" path="/var/lib/kubelet/pods/08308b23-48ef-4e90-90c4-1f977d90ef17/volumes" Dec 16 12:58:43.861112 containerd[1605]: time="2025-12-16T12:58:43.861061999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:58:43.861220 containerd[1605]: time="2025-12-16T12:58:43.861135146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:43.861269 kubelet[2806]: E1216 12:58:43.861240 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:58:43.861305 kubelet[2806]: E1216 12:58:43.861272 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:58:43.861417 kubelet[2806]: E1216 12:58:43.861379 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kgm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9b9b98c4-zxd2x_calico-system(0412d3e0-d5c8-47ca-9e38-144ba2ec1a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:43.862791 kubelet[2806]: E1216 12:58:43.862758 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9b9b98c4-zxd2x" podUID="0412d3e0-d5c8-47ca-9e38-144ba2ec1a92" Dec 16 12:58:43.947578 kubelet[2806]: E1216 12:58:43.947516 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9b9b98c4-zxd2x" podUID="0412d3e0-d5c8-47ca-9e38-144ba2ec1a92" Dec 16 12:58:44.110000 audit[4217]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=4217 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:44.110000 audit[4217]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffca328bd70 a2=0 a3=7ffca328bd5c items=0 ppid=2937 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:44.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:44.118000 audit[4217]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=4217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:44.118000 audit[4217]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca328bd70 a2=0 a3=0 items=0 ppid=2937 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:44.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:44.506046 systemd-networkd[1515]: cali53d9b61001b: Gained IPv6LL Dec 16 12:58:44.624804 systemd[1]: Started sshd@7-10.0.0.102:22-10.0.0.1:49826.service - OpenSSH per-connection server daemon (10.0.0.1:49826). Dec 16 12:58:44.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.102:22-10.0.0.1:49826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:58:44.709000 audit[4242]: USER_ACCT pid=4242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:44.710515 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 49826 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:58:44.710000 audit[4242]: CRED_ACQ pid=4242 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:44.710000 audit[4242]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefc881d50 a2=3 a3=0 items=0 ppid=1 pid=4242 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:44.710000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:58:44.712002 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:44.716495 systemd-logind[1584]: New session 8 of user core. Dec 16 12:58:44.730124 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 16 12:58:44.732000 audit[4242]: USER_START pid=4242 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:44.734000 audit[4247]: CRED_ACQ pid=4247 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:44.823516 sshd[4247]: Connection closed by 10.0.0.1 port 49826 Dec 16 12:58:44.823854 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:44.824000 audit[4242]: USER_END pid=4242 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:44.824000 audit[4242]: CRED_DISP pid=4242 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:44.830347 systemd[1]: sshd@7-10.0.0.102:22-10.0.0.1:49826.service: Deactivated successfully. Dec 16 12:58:44.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.102:22-10.0.0.1:49826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:44.832501 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:58:44.834577 systemd-logind[1584]: Session 8 logged out. Waiting for processes to exit. 
Dec 16 12:58:44.836245 systemd-logind[1584]: Removed session 8. Dec 16 12:58:44.950127 kubelet[2806]: E1216 12:58:44.949998 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9b9b98c4-zxd2x" podUID="0412d3e0-d5c8-47ca-9e38-144ba2ec1a92" Dec 16 12:58:46.381720 kubelet[2806]: I1216 12:58:46.381640 2806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:58:46.382231 kubelet[2806]: E1216 12:58:46.382141 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:46.501000 audit[4307]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:46.501000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd360c1a80 a2=0 a3=7ffd360c1a6c items=0 ppid=2937 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:46.501000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:46.507000 audit[4307]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:46.507000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd360c1a80 a2=0 a3=7ffd360c1a6c items=0 ppid=2937 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:46.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:46.952966 kubelet[2806]: E1216 12:58:46.952930 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:47.466000 audit: BPF prog-id=173 op=LOAD Dec 16 12:58:47.469873 kernel: kauditd_printk_skb: 50 callbacks suppressed Dec 16 12:58:47.469985 kernel: audit: type=1334 audit(1765889927.466:575): prog-id=173 op=LOAD Dec 16 12:58:47.466000 audit[4346]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4dff5750 a2=98 a3=1fffffffffffffff items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.476744 kernel: audit: type=1300 audit(1765889927.466:575): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4dff5750 a2=98 a3=1fffffffffffffff items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:47.476816 kernel: audit: type=1327 audit(1765889927.466:575): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.482839 kernel: audit: type=1334 audit(1765889927.466:576): prog-id=173 op=UNLOAD Dec 16 12:58:47.466000 audit: BPF prog-id=173 op=UNLOAD Dec 16 12:58:47.466000 audit[4346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd4dff5720 a3=0 items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.490057 kernel: audit: type=1300 audit(1765889927.466:576): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd4dff5720 a3=0 items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.495742 kernel: audit: type=1327 audit(1765889927.466:576): 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.495873 kernel: audit: type=1334 audit(1765889927.466:577): prog-id=174 op=LOAD Dec 16 12:58:47.466000 audit: BPF prog-id=174 op=LOAD Dec 16 12:58:47.497206 kernel: audit: type=1300 audit(1765889927.466:577): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4dff5630 a2=94 a3=3 items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.466000 audit[4346]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4dff5630 a2=94 a3=3 items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.507812 kernel: audit: type=1327 audit(1765889927.466:577): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.508055 kernel: audit: type=1334 audit(1765889927.466:578): prog-id=174 op=UNLOAD Dec 16 12:58:47.466000 audit: BPF prog-id=174 op=UNLOAD Dec 16 12:58:47.466000 audit[4346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 
a1=7ffd4dff5630 a2=94 a3=3 items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.466000 audit: BPF prog-id=175 op=LOAD Dec 16 12:58:47.466000 audit[4346]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd4dff5670 a2=94 a3=7ffd4dff5850 items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.466000 audit: BPF prog-id=175 op=UNLOAD Dec 16 12:58:47.466000 audit[4346]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd4dff5670 a2=94 a3=7ffd4dff5850 items=0 ppid=4329 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.466000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 12:58:47.468000 audit: BPF prog-id=176 op=LOAD Dec 16 
12:58:47.468000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6a753ae0 a2=98 a3=3 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.468000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.468000 audit: BPF prog-id=176 op=UNLOAD Dec 16 12:58:47.468000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff6a753ab0 a3=0 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.468000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.468000 audit: BPF prog-id=177 op=LOAD Dec 16 12:58:47.468000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6a7538d0 a2=94 a3=54428f items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.468000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.468000 audit: BPF prog-id=177 op=UNLOAD Dec 16 12:58:47.468000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6a7538d0 a2=94 a3=54428f items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.468000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.468000 audit: BPF prog-id=178 op=LOAD Dec 16 12:58:47.468000 audit[4347]: SYSCALL arch=c000003e syscall=321 
success=yes exit=4 a0=5 a1=7fff6a753900 a2=94 a3=2 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.468000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.468000 audit: BPF prog-id=178 op=UNLOAD Dec 16 12:58:47.468000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6a753900 a2=0 a3=2 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.468000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.681000 audit: BPF prog-id=179 op=LOAD Dec 16 12:58:47.681000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff6a7537c0 a2=94 a3=1 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.681000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.681000 audit: BPF prog-id=179 op=UNLOAD Dec 16 12:58:47.681000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff6a7537c0 a2=94 a3=1 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.681000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.690000 audit: BPF prog-id=180 op=LOAD Dec 16 12:58:47.690000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6a7537b0 a2=94 a3=4 items=0 ppid=4329 
pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.690000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.690000 audit: BPF prog-id=180 op=UNLOAD Dec 16 12:58:47.690000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6a7537b0 a2=0 a3=4 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.690000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.690000 audit: BPF prog-id=181 op=LOAD Dec 16 12:58:47.690000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff6a753610 a2=94 a3=5 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.690000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.690000 audit: BPF prog-id=181 op=UNLOAD Dec 16 12:58:47.690000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff6a753610 a2=0 a3=5 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.690000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.690000 audit: BPF prog-id=182 op=LOAD Dec 16 12:58:47.690000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6a753830 a2=94 a3=6 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.690000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.690000 audit: BPF prog-id=182 op=UNLOAD Dec 16 12:58:47.690000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff6a753830 a2=0 a3=6 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.690000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.691000 audit: BPF prog-id=183 op=LOAD Dec 16 12:58:47.691000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff6a752fe0 a2=94 a3=88 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.691000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.691000 audit: BPF prog-id=184 op=LOAD Dec 16 12:58:47.691000 audit[4347]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff6a752e60 a2=94 a3=2 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.691000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.691000 audit: BPF prog-id=184 op=UNLOAD Dec 16 12:58:47.691000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff6a752e90 a2=0 a3=7fff6a752f90 items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.691000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.692000 audit: BPF prog-id=183 op=UNLOAD Dec 16 12:58:47.692000 audit[4347]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=9886d10 a2=0 a3=fb55b347ab04d1e items=0 ppid=4329 pid=4347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.692000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 12:58:47.701000 audit: BPF prog-id=185 op=LOAD Dec 16 12:58:47.701000 audit[4350]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeeaea1260 a2=98 a3=1999999999999999 items=0 ppid=4329 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.701000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:47.701000 audit: BPF prog-id=185 op=UNLOAD Dec 16 12:58:47.701000 audit[4350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeeaea1230 a3=0 items=0 ppid=4329 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.701000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:47.701000 audit: BPF prog-id=186 op=LOAD Dec 16 12:58:47.701000 audit[4350]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeeaea1140 a2=94 a3=ffff items=0 ppid=4329 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.701000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:47.701000 audit: BPF prog-id=186 op=UNLOAD Dec 16 12:58:47.701000 audit[4350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeeaea1140 a2=94 a3=ffff items=0 ppid=4329 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.701000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:47.701000 audit: BPF prog-id=187 op=LOAD Dec 16 12:58:47.701000 audit[4350]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeeaea1180 a2=94 a3=7ffeeaea1360 items=0 ppid=4329 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.701000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:47.701000 audit: BPF prog-id=187 op=UNLOAD Dec 16 12:58:47.701000 audit[4350]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeeaea1180 a2=94 a3=7ffeeaea1360 items=0 ppid=4329 pid=4350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.701000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 12:58:47.760343 systemd-networkd[1515]: vxlan.calico: Link UP Dec 16 12:58:47.760355 systemd-networkd[1515]: vxlan.calico: Gained carrier Dec 16 12:58:47.786000 audit: BPF prog-id=188 op=LOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb6f0fa70 a2=98 a3=0 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=188 op=UNLOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=3 a1=8 a2=7ffeb6f0fa40 a3=0 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=189 op=LOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb6f0f880 a2=94 a3=54428f items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=189 op=UNLOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb6f0f880 a2=94 a3=54428f items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=190 op=LOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb6f0f8b0 a2=94 a3=2 items=0 
ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=190 op=UNLOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb6f0f8b0 a2=0 a3=2 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=191 op=LOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb6f0f660 a2=94 a3=4 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=191 op=UNLOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb6f0f660 a2=94 a3=4 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=192 op=LOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb6f0f760 a2=94 a3=7ffeb6f0f8e0 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.786000 audit: BPF prog-id=192 op=UNLOAD Dec 16 12:58:47.786000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb6f0f760 a2=0 a3=7ffeb6f0f8e0 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.786000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.789000 audit: BPF prog-id=193 op=LOAD Dec 16 12:58:47.789000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb6f0ee90 a2=94 a3=2 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.789000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.789000 audit: BPF prog-id=193 op=UNLOAD Dec 16 12:58:47.789000 audit[4379]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb6f0ee90 a2=0 a3=2 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.789000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.789000 audit: BPF prog-id=194 op=LOAD Dec 16 12:58:47.789000 audit[4379]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb6f0ef90 a2=94 a3=30 items=0 ppid=4329 pid=4379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.789000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 12:58:47.800000 audit: BPF prog-id=195 op=LOAD Dec 16 12:58:47.800000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffecc24b070 a2=98 a3=0 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.800000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.800000 audit: BPF prog-id=195 op=UNLOAD Dec 16 12:58:47.800000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffecc24b040 a3=0 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.800000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.800000 audit: BPF prog-id=196 op=LOAD Dec 16 12:58:47.800000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecc24ae60 a2=94 a3=54428f items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.800000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.800000 audit: BPF prog-id=196 op=UNLOAD Dec 16 12:58:47.800000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffecc24ae60 a2=94 a3=54428f items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.800000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.800000 audit: BPF prog-id=197 op=LOAD Dec 16 12:58:47.800000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecc24ae90 a2=94 a3=2 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.800000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.800000 audit: BPF prog-id=197 op=UNLOAD Dec 16 12:58:47.800000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffecc24ae90 a2=0 a3=2 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.800000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.988000 audit: BPF prog-id=198 op=LOAD Dec 16 12:58:47.988000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffecc24ad50 a2=94 a3=1 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.988000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.988000 audit: BPF prog-id=198 op=UNLOAD Dec 16 12:58:47.988000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffecc24ad50 a2=94 a3=1 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.988000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.997000 audit: BPF prog-id=199 op=LOAD Dec 16 12:58:47.997000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffecc24ad40 a2=94 a3=4 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.997000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.997000 audit: BPF prog-id=199 op=UNLOAD Dec 16 12:58:47.997000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffecc24ad40 a2=0 a3=4 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.997000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.998000 audit: BPF prog-id=200 op=LOAD Dec 16 12:58:47.998000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffecc24aba0 a2=94 a3=5 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.998000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.998000 audit: BPF prog-id=200 op=UNLOAD Dec 16 12:58:47.998000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffecc24aba0 a2=0 a3=5 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.998000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.998000 audit: BPF prog-id=201 op=LOAD Dec 16 12:58:47.998000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffecc24adc0 a2=94 a3=6 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.998000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.998000 audit: BPF prog-id=201 op=UNLOAD Dec 16 12:58:47.998000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffecc24adc0 a2=0 a3=6 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.998000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.998000 audit: BPF prog-id=202 op=LOAD Dec 16 12:58:47.998000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffecc24a570 a2=94 a3=88 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.998000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.998000 audit: BPF prog-id=203 op=LOAD Dec 16 12:58:47.998000 audit[4388]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffecc24a3f0 a2=94 a3=2 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.998000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.998000 audit: BPF prog-id=203 op=UNLOAD Dec 16 12:58:47.998000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffecc24a420 a2=0 a3=7ffecc24a520 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.998000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:47.999000 audit: BPF prog-id=202 op=UNLOAD Dec 16 12:58:47.999000 audit[4388]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7482d10 a2=0 a3=d32626cb34139db8 items=0 ppid=4329 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:47.999000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 12:58:48.012000 audit: BPF prog-id=194 op=UNLOAD Dec 16 12:58:48.012000 audit[4329]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000882240 a2=0 a3=0 items=0 ppid=4117 pid=4329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:48.012000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 12:58:48.145000 audit[4412]: NETFILTER_CFG table=mangle:121 
family=2 entries=16 op=nft_register_chain pid=4412 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:48.145000 audit[4412]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffde75abb10 a2=0 a3=7ffde75abafc items=0 ppid=4329 pid=4412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:48.145000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:48.148000 audit[4411]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4411 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:48.148000 audit[4411]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd93128a80 a2=0 a3=7ffd93128a6c items=0 ppid=4329 pid=4411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:48.148000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:48.154000 audit[4410]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4410 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:48.154000 audit[4410]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fff350512c0 a2=0 a3=7fff350512ac items=0 ppid=4329 pid=4410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:48.154000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:48.160000 audit[4416]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4416 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:48.160000 audit[4416]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffefcdfd9b0 a2=0 a3=7ffefcdfd99c items=0 ppid=4329 pid=4416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:48.160000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:48.797060 kubelet[2806]: E1216 12:58:48.797010 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:48.797597 containerd[1605]: time="2025-12-16T12:58:48.797224538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lx5v,Uid:4e3b9dec-c7ec-4533-9b5f-135d8bcc981d,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:48.797930 containerd[1605]: time="2025-12-16T12:58:48.797600192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhwjm,Uid:ab274844-97cc-406d-90f8-4c834c435e0c,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:48.962507 systemd-networkd[1515]: cali9b4d3d8693f: Link UP Dec 16 12:58:48.962703 systemd-networkd[1515]: cali9b4d3d8693f: Gained carrier Dec 16 12:58:48.976412 containerd[1605]: 2025-12-16 12:58:48.890 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4lx5v-eth0 csi-node-driver- 
calico-system 4e3b9dec-c7ec-4533-9b5f-135d8bcc981d 756 0 2025-12-16 12:58:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4lx5v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9b4d3d8693f [] [] }} ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-" Dec 16 12:58:48.976412 containerd[1605]: 2025-12-16 12:58:48.890 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-eth0" Dec 16 12:58:48.976412 containerd[1605]: 2025-12-16 12:58:48.920 [INFO][4452] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" HandleID="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Workload="localhost-k8s-csi--node--driver--4lx5v-eth0" Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.920 [INFO][4452] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" HandleID="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Workload="localhost-k8s-csi--node--driver--4lx5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4lx5v", "timestamp":"2025-12-16 12:58:48.920229656 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.920 [INFO][4452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.920 [INFO][4452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.920 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.928 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" host="localhost" Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.931 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.935 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.936 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.938 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:48.976639 containerd[1605]: 2025-12-16 12:58:48.938 [INFO][4452] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" host="localhost" Dec 16 12:58:48.977084 containerd[1605]: 2025-12-16 12:58:48.939 [INFO][4452] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72 Dec 16 12:58:48.977084 containerd[1605]: 2025-12-16 12:58:48.948 [INFO][4452] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" host="localhost" Dec 16 12:58:48.977084 containerd[1605]: 2025-12-16 12:58:48.956 [INFO][4452] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" host="localhost" Dec 16 12:58:48.977084 containerd[1605]: 2025-12-16 12:58:48.956 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" host="localhost" Dec 16 12:58:48.977084 containerd[1605]: 2025-12-16 12:58:48.956 [INFO][4452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:48.977084 containerd[1605]: 2025-12-16 12:58:48.956 [INFO][4452] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" HandleID="k8s-pod-network.46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Workload="localhost-k8s-csi--node--driver--4lx5v-eth0" Dec 16 12:58:48.977254 containerd[1605]: 2025-12-16 12:58:48.959 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4lx5v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e3b9dec-c7ec-4533-9b5f-135d8bcc981d", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4lx5v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b4d3d8693f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:48.977318 containerd[1605]: 2025-12-16 12:58:48.959 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-eth0" Dec 16 12:58:48.977318 containerd[1605]: 2025-12-16 12:58:48.959 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b4d3d8693f ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-eth0" Dec 16 12:58:48.977318 containerd[1605]: 2025-12-16 12:58:48.963 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-eth0" Dec 16 12:58:48.977380 containerd[1605]: 2025-12-16 12:58:48.963 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4lx5v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e3b9dec-c7ec-4533-9b5f-135d8bcc981d", ResourceVersion:"756", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72", Pod:"csi-node-driver-4lx5v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b4d3d8693f", MAC:"ae:3b:68:39:a7:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:48.977433 containerd[1605]: 2025-12-16 12:58:48.972 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" Namespace="calico-system" Pod="csi-node-driver-4lx5v" WorkloadEndpoint="localhost-k8s-csi--node--driver--4lx5v-eth0" Dec 16 12:58:48.987000 audit[4477]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4477 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:48.987000 audit[4477]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffc5055c260 a2=0 a3=7ffc5055c24c items=0 ppid=4329 pid=4477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:48.987000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:48.999080 containerd[1605]: time="2025-12-16T12:58:48.998691362Z" level=info msg="connecting to shim 46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72" address="unix:///run/containerd/s/58718fda3d66b4ab0861e0e19d9e6b5ccd34e780866f8d846a39cd0d312815c5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:49.034051 systemd[1]: Started cri-containerd-46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72.scope - libcontainer container 46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72. Dec 16 12:58:49.047000 audit: BPF prog-id=204 op=LOAD Dec 16 12:58:49.052000 audit: BPF prog-id=205 op=LOAD Dec 16 12:58:49.052000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436633831396535616139373834326562633833386562643463613931 Dec 16 12:58:49.052000 audit: BPF prog-id=205 op=UNLOAD Dec 16 12:58:49.052000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.052000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436633831396535616139373834326562633833386562643463613931 Dec 16 12:58:49.052000 audit: BPF prog-id=206 op=LOAD Dec 16 12:58:49.052000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436633831396535616139373834326562633833386562643463613931 Dec 16 12:58:49.052000 audit: BPF prog-id=207 op=LOAD Dec 16 12:58:49.052000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436633831396535616139373834326562633833386562643463613931 Dec 16 12:58:49.052000 audit: BPF prog-id=207 op=UNLOAD Dec 16 12:58:49.052000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:58:49.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436633831396535616139373834326562633833386562643463613931 Dec 16 12:58:49.052000 audit: BPF prog-id=206 op=UNLOAD Dec 16 12:58:49.052000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436633831396535616139373834326562633833386562643463613931 Dec 16 12:58:49.052000 audit: BPF prog-id=208 op=LOAD Dec 16 12:58:49.052000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436633831396535616139373834326562633833386562643463613931 Dec 16 12:58:49.054753 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:49.068804 systemd-networkd[1515]: cali25556e939d1: Link UP Dec 16 12:58:49.069083 systemd-networkd[1515]: cali25556e939d1: Gained carrier Dec 16 12:58:49.081857 
containerd[1605]: time="2025-12-16T12:58:49.081782168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lx5v,Uid:4e3b9dec-c7ec-4533-9b5f-135d8bcc981d,Namespace:calico-system,Attempt:0,} returns sandbox id \"46c819e5aa97842ebc838ebd4ca9184ba08ab9daf9f9a449a0edd06dcf5a8c72\"" Dec 16 12:58:49.084187 containerd[1605]: time="2025-12-16T12:58:49.084116960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:58:49.086239 containerd[1605]: 2025-12-16 12:58:48.892 [INFO][4431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0 coredns-674b8bbfcf- kube-system ab274844-97cc-406d-90f8-4c834c435e0c 873 0 2025-12-16 12:58:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fhwjm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali25556e939d1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-" Dec 16 12:58:49.086239 containerd[1605]: 2025-12-16 12:58:48.892 [INFO][4431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" Dec 16 12:58:49.086239 containerd[1605]: 2025-12-16 12:58:48.929 [INFO][4454] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" HandleID="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" 
Workload="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:48.930 [INFO][4454] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" HandleID="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Workload="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fhwjm", "timestamp":"2025-12-16 12:58:48.929974634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:48.930 [INFO][4454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:48.957 [INFO][4454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:48.957 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:49.029 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" host="localhost" Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:49.034 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:49.042 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:49.044 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:49.046 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:49.086432 containerd[1605]: 2025-12-16 12:58:49.046 [INFO][4454] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" host="localhost" Dec 16 12:58:49.086661 containerd[1605]: 2025-12-16 12:58:49.048 [INFO][4454] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53 Dec 16 12:58:49.086661 containerd[1605]: 2025-12-16 12:58:49.054 [INFO][4454] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" host="localhost" Dec 16 12:58:49.086661 containerd[1605]: 2025-12-16 12:58:49.061 [INFO][4454] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" host="localhost" Dec 16 12:58:49.086661 containerd[1605]: 2025-12-16 12:58:49.062 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" host="localhost" Dec 16 12:58:49.086661 containerd[1605]: 2025-12-16 12:58:49.062 [INFO][4454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:49.086661 containerd[1605]: 2025-12-16 12:58:49.062 [INFO][4454] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" HandleID="k8s-pod-network.1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Workload="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" Dec 16 12:58:49.086915 containerd[1605]: 2025-12-16 12:58:49.066 [INFO][4431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab274844-97cc-406d-90f8-4c834c435e0c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fhwjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25556e939d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:49.087618 containerd[1605]: 2025-12-16 12:58:49.066 [INFO][4431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" Dec 16 12:58:49.087618 containerd[1605]: 2025-12-16 12:58:49.066 [INFO][4431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25556e939d1 ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" Dec 16 12:58:49.087618 containerd[1605]: 2025-12-16 12:58:49.070 [INFO][4431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" Dec 16 12:58:49.087708 containerd[1605]: 2025-12-16 12:58:49.070 [INFO][4431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab274844-97cc-406d-90f8-4c834c435e0c", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53", Pod:"coredns-674b8bbfcf-fhwjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25556e939d1", MAC:"c6:f9:cc:0a:60:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:49.087708 containerd[1605]: 2025-12-16 12:58:49.081 [INFO][4431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" Namespace="kube-system" Pod="coredns-674b8bbfcf-fhwjm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fhwjm-eth0" Dec 16 12:58:49.107000 audit[4532]: NETFILTER_CFG table=filter:126 family=2 entries=46 op=nft_register_chain pid=4532 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:49.107000 audit[4532]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7fff3f0d5770 a2=0 a3=7fff3f0d575c items=0 ppid=4329 pid=4532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.107000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:49.113650 containerd[1605]: time="2025-12-16T12:58:49.113579178Z" level=info msg="connecting to shim 1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53" address="unix:///run/containerd/s/2c2f7d79e6ffdd110d13c65e03cae69500ecbfee1b01290c92a15795035ddd27" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:49.143126 systemd[1]: Started cri-containerd-1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53.scope - libcontainer container 1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53. 
Dec 16 12:58:49.160000 audit: BPF prog-id=209 op=LOAD Dec 16 12:58:49.161000 audit: BPF prog-id=210 op=LOAD Dec 16 12:58:49.161000 audit[4552]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4541 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393838363862636364653838346135643431386531633766626433 Dec 16 12:58:49.161000 audit: BPF prog-id=210 op=UNLOAD Dec 16 12:58:49.161000 audit[4552]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393838363862636364653838346135643431386531633766626433 Dec 16 12:58:49.161000 audit: BPF prog-id=211 op=LOAD Dec 16 12:58:49.161000 audit[4552]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4541 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.161000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393838363862636364653838346135643431386531633766626433 Dec 16 12:58:49.161000 audit: BPF prog-id=212 op=LOAD Dec 16 12:58:49.161000 audit[4552]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4541 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393838363862636364653838346135643431386531633766626433 Dec 16 12:58:49.161000 audit: BPF prog-id=212 op=UNLOAD Dec 16 12:58:49.161000 audit[4552]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393838363862636364653838346135643431386531633766626433 Dec 16 12:58:49.161000 audit: BPF prog-id=211 op=UNLOAD Dec 16 12:58:49.161000 audit[4552]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:49.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393838363862636364653838346135643431386531633766626433 Dec 16 12:58:49.161000 audit: BPF prog-id=213 op=LOAD Dec 16 12:58:49.161000 audit[4552]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4541 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393838363862636364653838346135643431386531633766626433 Dec 16 12:58:49.165639 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:49.201494 containerd[1605]: time="2025-12-16T12:58:49.201447350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fhwjm,Uid:ab274844-97cc-406d-90f8-4c834c435e0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53\"" Dec 16 12:58:49.202754 kubelet[2806]: E1216 12:58:49.202454 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:49.207890 containerd[1605]: time="2025-12-16T12:58:49.207695464Z" level=info msg="CreateContainer within sandbox \"1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:58:49.219333 containerd[1605]: 
time="2025-12-16T12:58:49.219260737Z" level=info msg="Container 757df9b8cf83df049a59337385cdd6f2e00b4d05dcd28c50366684cfef81e717: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:49.225646 containerd[1605]: time="2025-12-16T12:58:49.225582020Z" level=info msg="CreateContainer within sandbox \"1598868bccde884a5d418e1c7fbd39a4fc2fd5e9b2568e043a5dd212c0123e53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"757df9b8cf83df049a59337385cdd6f2e00b4d05dcd28c50366684cfef81e717\"" Dec 16 12:58:49.226399 containerd[1605]: time="2025-12-16T12:58:49.226337748Z" level=info msg="StartContainer for \"757df9b8cf83df049a59337385cdd6f2e00b4d05dcd28c50366684cfef81e717\"" Dec 16 12:58:49.227547 containerd[1605]: time="2025-12-16T12:58:49.227517772Z" level=info msg="connecting to shim 757df9b8cf83df049a59337385cdd6f2e00b4d05dcd28c50366684cfef81e717" address="unix:///run/containerd/s/2c2f7d79e6ffdd110d13c65e03cae69500ecbfee1b01290c92a15795035ddd27" protocol=ttrpc version=3 Dec 16 12:58:49.243091 systemd-networkd[1515]: vxlan.calico: Gained IPv6LL Dec 16 12:58:49.252078 systemd[1]: Started cri-containerd-757df9b8cf83df049a59337385cdd6f2e00b4d05dcd28c50366684cfef81e717.scope - libcontainer container 757df9b8cf83df049a59337385cdd6f2e00b4d05dcd28c50366684cfef81e717. 
Dec 16 12:58:49.267000 audit: BPF prog-id=214 op=LOAD Dec 16 12:58:49.268000 audit: BPF prog-id=215 op=LOAD Dec 16 12:58:49.268000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4541 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376466396238636638336466303439613539333337333835636464 Dec 16 12:58:49.268000 audit: BPF prog-id=215 op=UNLOAD Dec 16 12:58:49.268000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376466396238636638336466303439613539333337333835636464 Dec 16 12:58:49.268000 audit: BPF prog-id=216 op=LOAD Dec 16 12:58:49.268000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4541 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.268000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376466396238636638336466303439613539333337333835636464 Dec 16 12:58:49.268000 audit: BPF prog-id=217 op=LOAD Dec 16 12:58:49.268000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4541 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376466396238636638336466303439613539333337333835636464 Dec 16 12:58:49.268000 audit: BPF prog-id=217 op=UNLOAD Dec 16 12:58:49.268000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376466396238636638336466303439613539333337333835636464 Dec 16 12:58:49.268000 audit: BPF prog-id=216 op=UNLOAD Dec 16 12:58:49.268000 audit[4577]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4541 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:49.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376466396238636638336466303439613539333337333835636464 Dec 16 12:58:49.268000 audit: BPF prog-id=218 op=LOAD Dec 16 12:58:49.268000 audit[4577]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4541 pid=4577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735376466396238636638336466303439613539333337333835636464 Dec 16 12:58:49.290990 containerd[1605]: time="2025-12-16T12:58:49.290801590Z" level=info msg="StartContainer for \"757df9b8cf83df049a59337385cdd6f2e00b4d05dcd28c50366684cfef81e717\" returns successfully" Dec 16 12:58:49.457514 containerd[1605]: time="2025-12-16T12:58:49.457352745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:49.464985 containerd[1605]: time="2025-12-16T12:58:49.464882156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:58:49.464985 containerd[1605]: time="2025-12-16T12:58:49.464967496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:49.466700 kubelet[2806]: E1216 12:58:49.466657 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:58:49.466853 kubelet[2806]: E1216 12:58:49.466712 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:58:49.466933 kubelet[2806]: E1216 12:58:49.466877 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zw4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:
[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:49.468967 containerd[1605]: time="2025-12-16T12:58:49.468936314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:58:49.796792 containerd[1605]: time="2025-12-16T12:58:49.796738218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c7fb6bf4b-pj7kl,Uid:ae64ff47-24ee-417f-a174-8a680294cf45,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:49.840269 systemd[1]: Started sshd@8-10.0.0.102:22-10.0.0.1:49842.service - OpenSSH per-connection server daemon (10.0.0.1:49842). Dec 16 12:58:49.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.102:22-10.0.0.1:49842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:58:49.841230 containerd[1605]: time="2025-12-16T12:58:49.841180519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:49.842473 containerd[1605]: time="2025-12-16T12:58:49.842432658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:58:49.842544 containerd[1605]: time="2025-12-16T12:58:49.842513620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:49.842741 kubelet[2806]: E1216 12:58:49.842691 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:58:49.843453 kubelet[2806]: E1216 12:58:49.842769 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:58:49.843453 kubelet[2806]: E1216 12:58:49.842951 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zw4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:49.844792 kubelet[2806]: E1216 12:58:49.844705 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:49.924472 systemd-networkd[1515]: cali3c47d9a8439: Link UP Dec 16 12:58:49.924784 systemd-networkd[1515]: cali3c47d9a8439: Gained carrier Dec 16 12:58:49.934000 audit[4626]: USER_ACCT pid=4626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:49.935464 sshd[4626]: Accepted publickey for core from 10.0.0.1 port 49842 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:58:49.936000 audit[4626]: CRED_ACQ pid=4626 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:49.936000 audit[4626]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd383272b0 a2=3 a3=0 items=0 ppid=1 pid=4626 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.936000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:58:49.937959 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.833 [INFO][4611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0 calico-apiserver-5c7fb6bf4b- calico-apiserver ae64ff47-24ee-417f-a174-8a680294cf45 870 0 2025-12-16 12:58:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c7fb6bf4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c7fb6bf4b-pj7kl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3c47d9a8439 [] [] }} ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.834 [INFO][4611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.873 [INFO][4628] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" HandleID="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" 
Workload="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.874 [INFO][4628] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" HandleID="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Workload="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c7fb6bf4b-pj7kl", "timestamp":"2025-12-16 12:58:49.873664046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.874 [INFO][4628] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.874 [INFO][4628] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.874 [INFO][4628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.884 [INFO][4628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.891 [INFO][4628] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.899 [INFO][4628] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.901 [INFO][4628] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.904 [INFO][4628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.904 [INFO][4628] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.905 [INFO][4628] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196 Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.909 [INFO][4628] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.917 [INFO][4628] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.917 [INFO][4628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" host="localhost" Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.917 [INFO][4628] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:49.940566 containerd[1605]: 2025-12-16 12:58:49.917 [INFO][4628] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" HandleID="k8s-pod-network.7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Workload="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" Dec 16 12:58:49.941538 containerd[1605]: 2025-12-16 12:58:49.921 [INFO][4611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0", GenerateName:"calico-apiserver-5c7fb6bf4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae64ff47-24ee-417f-a174-8a680294cf45", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c7fb6bf4b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c7fb6bf4b-pj7kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c47d9a8439", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:49.941538 containerd[1605]: 2025-12-16 12:58:49.921 [INFO][4611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" Dec 16 12:58:49.941538 containerd[1605]: 2025-12-16 12:58:49.921 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c47d9a8439 ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" Dec 16 12:58:49.941538 containerd[1605]: 2025-12-16 12:58:49.925 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" Dec 16 12:58:49.941538 containerd[1605]: 2025-12-16 12:58:49.926 [INFO][4611] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0", GenerateName:"calico-apiserver-5c7fb6bf4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae64ff47-24ee-417f-a174-8a680294cf45", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c7fb6bf4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196", Pod:"calico-apiserver-5c7fb6bf4b-pj7kl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3c47d9a8439", MAC:"12:bd:97:df:89:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:49.941538 containerd[1605]: 2025-12-16 12:58:49.935 [INFO][4611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" Namespace="calico-apiserver" Pod="calico-apiserver-5c7fb6bf4b-pj7kl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c7fb6bf4b--pj7kl-eth0" Dec 16 12:58:49.943424 systemd-logind[1584]: New session 9 of user core. Dec 16 12:58:49.952032 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:58:49.952000 audit[4647]: NETFILTER_CFG table=filter:127 family=2 entries=58 op=nft_register_chain pid=4647 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:49.952000 audit[4647]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffe920172b0 a2=0 a3=7ffe9201729c items=0 ppid=4329 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:49.952000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:49.954000 audit[4626]: USER_START pid=4626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:49.963000 audit[4648]: CRED_ACQ pid=4648 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:49.966148 containerd[1605]: time="2025-12-16T12:58:49.966074952Z" level=info msg="connecting to shim 7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196" 
address="unix:///run/containerd/s/644649411170c072d4822d4d0601d7d918f5ec907659105ab18bd39c0ccf115a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:50.004094 systemd[1]: Started cri-containerd-7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196.scope - libcontainer container 7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196. Dec 16 12:58:50.018000 audit: BPF prog-id=219 op=LOAD Dec 16 12:58:50.019000 audit: BPF prog-id=220 op=LOAD Dec 16 12:58:50.019000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4657 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764333432613764386331373766326337313262336135303232656534 Dec 16 12:58:50.019000 audit: BPF prog-id=220 op=UNLOAD Dec 16 12:58:50.019000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4657 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.019000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764333432613764386331373766326337313262336135303232656534 Dec 16 12:58:50.020000 audit: BPF prog-id=221 op=LOAD Dec 16 12:58:50.020000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4657 pid=4672 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764333432613764386331373766326337313262336135303232656534 Dec 16 12:58:50.020000 audit: BPF prog-id=222 op=LOAD Dec 16 12:58:50.020000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4657 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764333432613764386331373766326337313262336135303232656534 Dec 16 12:58:50.020000 audit: BPF prog-id=222 op=UNLOAD Dec 16 12:58:50.020000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4657 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764333432613764386331373766326337313262336135303232656534 Dec 16 12:58:50.020000 audit: BPF prog-id=221 op=UNLOAD Dec 16 12:58:50.020000 audit[4672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=4657 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764333432613764386331373766326337313262336135303232656534 Dec 16 12:58:50.020000 audit: BPF prog-id=223 op=LOAD Dec 16 12:58:50.020000 audit[4672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4657 pid=4672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.020000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3764333432613764386331373766326337313262336135303232656534 Dec 16 12:58:50.022653 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:50.061208 containerd[1605]: time="2025-12-16T12:58:50.061021232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c7fb6bf4b-pj7kl,Uid:ae64ff47-24ee-417f-a174-8a680294cf45,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7d342a7d8c177f2c712b3a5022ee46d06fb00f840b4933b246679973e4834196\"" Dec 16 12:58:50.063474 containerd[1605]: time="2025-12-16T12:58:50.063433228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:58:50.075736 sshd[4648]: Connection closed by 10.0.0.1 port 49842 Dec 16 12:58:50.075291 systemd-networkd[1515]: cali9b4d3d8693f: Gained IPv6LL Dec 16 12:58:50.075078 
sshd-session[4626]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:50.077000 audit[4626]: USER_END pid=4626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:50.077000 audit[4626]: CRED_DISP pid=4626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:50.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.102:22-10.0.0.1:49842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:50.082039 systemd[1]: sshd@8-10.0.0.102:22-10.0.0.1:49842.service: Deactivated successfully. Dec 16 12:58:50.084709 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:58:50.085580 systemd-logind[1584]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:58:50.086747 systemd-logind[1584]: Removed session 9. 
Dec 16 12:58:50.148221 kubelet[2806]: E1216 12:58:50.148183 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:50.164938 kubelet[2806]: I1216 12:58:50.163039 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fhwjm" podStartSLOduration=42.163019692 podStartE2EDuration="42.163019692s" podCreationTimestamp="2025-12-16 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:50.158731756 +0000 UTC m=+46.451870250" watchObservedRunningTime="2025-12-16 12:58:50.163019692 +0000 UTC m=+46.456158176" Dec 16 12:58:50.174449 kubelet[2806]: E1216 12:58:50.174395 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:58:50.190000 audit[4708]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:50.190000 audit[4708]: SYSCALL arch=c000003e syscall=46 
success=yes exit=7480 a0=3 a1=7ffcbd782b80 a2=0 a3=7ffcbd782b6c items=0 ppid=2937 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:50.199000 audit[4708]: NETFILTER_CFG table=nat:129 family=2 entries=14 op=nft_register_rule pid=4708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:50.199000 audit[4708]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcbd782b80 a2=0 a3=0 items=0 ppid=2937 pid=4708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:50.224000 audit[4710]: NETFILTER_CFG table=filter:130 family=2 entries=17 op=nft_register_rule pid=4710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:50.224000 audit[4710]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdec767b30 a2=0 a3=7ffdec767b1c items=0 ppid=2937 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:50.234000 audit[4710]: NETFILTER_CFG table=nat:131 family=2 entries=35 op=nft_register_chain pid=4710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 
12:58:50.234000 audit[4710]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdec767b30 a2=0 a3=7ffdec767b1c items=0 ppid=2937 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:50.430861 containerd[1605]: time="2025-12-16T12:58:50.430692810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:50.431968 containerd[1605]: time="2025-12-16T12:58:50.431926456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:58:50.432032 containerd[1605]: time="2025-12-16T12:58:50.431987520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:50.432172 kubelet[2806]: E1216 12:58:50.432127 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:50.432226 kubelet[2806]: E1216 12:58:50.432185 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:50.432358 kubelet[2806]: E1216 12:58:50.432322 2806 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c7fb6bf4b-pj7kl_calico-apiserver(ae64ff47-24ee-417f-a174-8a680294cf45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:50.433528 kubelet[2806]: E1216 12:58:50.433491 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" podUID="ae64ff47-24ee-417f-a174-8a680294cf45" Dec 16 12:58:50.797097 containerd[1605]: time="2025-12-16T12:58:50.797042506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fbcbddcb-x6lxz,Uid:1c71642e-2c37-4a5d-aec0-8a0d6c89217c,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:50.797097 containerd[1605]: 
time="2025-12-16T12:58:50.797058376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7767f7c484-gsgtz,Uid:d3d0f018-dbb1-4af4-8317-c55456bbf69e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:50.797374 containerd[1605]: time="2025-12-16T12:58:50.797042696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5stm,Uid:e710a919-c171-452f-a8e0-220cab9661a8,Namespace:calico-system,Attempt:0,}" Dec 16 12:58:50.923585 systemd-networkd[1515]: cali10983caa5d2: Link UP Dec 16 12:58:50.925126 systemd-networkd[1515]: cali10983caa5d2: Gained carrier Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.848 [INFO][4712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0 calico-kube-controllers-77fbcbddcb- calico-system 1c71642e-2c37-4a5d-aec0-8a0d6c89217c 868 0 2025-12-16 12:58:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77fbcbddcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77fbcbddcb-x6lxz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali10983caa5d2 [] [] }} ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.848 [INFO][4712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.886 [INFO][4755] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" HandleID="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Workload="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.886 [INFO][4755] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" HandleID="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Workload="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77fbcbddcb-x6lxz", "timestamp":"2025-12-16 12:58:50.886283227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.886 [INFO][4755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.886 [INFO][4755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.886 [INFO][4755] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.896 [INFO][4755] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.900 [INFO][4755] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.903 [INFO][4755] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.904 [INFO][4755] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.906 [INFO][4755] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.906 [INFO][4755] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.907 [INFO][4755] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.911 [INFO][4755] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.917 [INFO][4755] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.917 [INFO][4755] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" host="localhost" Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.917 [INFO][4755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:50.940433 containerd[1605]: 2025-12-16 12:58:50.917 [INFO][4755] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" HandleID="k8s-pod-network.5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Workload="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" Dec 16 12:58:50.941440 containerd[1605]: 2025-12-16 12:58:50.919 [INFO][4712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0", GenerateName:"calico-kube-controllers-77fbcbddcb-", Namespace:"calico-system", SelfLink:"", UID:"1c71642e-2c37-4a5d-aec0-8a0d6c89217c", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fbcbddcb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77fbcbddcb-x6lxz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10983caa5d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:50.941440 containerd[1605]: 2025-12-16 12:58:50.919 [INFO][4712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" Dec 16 12:58:50.941440 containerd[1605]: 2025-12-16 12:58:50.919 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10983caa5d2 ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" Dec 16 12:58:50.941440 containerd[1605]: 2025-12-16 12:58:50.925 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" Dec 16 12:58:50.941440 containerd[1605]: 
2025-12-16 12:58:50.926 [INFO][4712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0", GenerateName:"calico-kube-controllers-77fbcbddcb-", Namespace:"calico-system", SelfLink:"", UID:"1c71642e-2c37-4a5d-aec0-8a0d6c89217c", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fbcbddcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b", Pod:"calico-kube-controllers-77fbcbddcb-x6lxz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali10983caa5d2", MAC:"82:06:7f:07:91:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:50.941440 containerd[1605]: 
2025-12-16 12:58:50.937 [INFO][4712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" Namespace="calico-system" Pod="calico-kube-controllers-77fbcbddcb-x6lxz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77fbcbddcb--x6lxz-eth0" Dec 16 12:58:50.952000 audit[4788]: NETFILTER_CFG table=filter:132 family=2 entries=48 op=nft_register_chain pid=4788 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:50.952000 audit[4788]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffdffe3da50 a2=0 a3=7ffdffe3da3c items=0 ppid=4329 pid=4788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:50.952000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:50.960868 containerd[1605]: time="2025-12-16T12:58:50.960793756Z" level=info msg="connecting to shim 5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b" address="unix:///run/containerd/s/ac61b95277bb6b39c2ab9c51d6bbdefdb65ed2ed3fee85c5970d028851743f3a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:50.992009 systemd[1]: Started cri-containerd-5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b.scope - libcontainer container 5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b. 
Dec 16 12:58:51.004000 audit: BPF prog-id=224 op=LOAD Dec 16 12:58:51.005000 audit: BPF prog-id=225 op=LOAD Dec 16 12:58:51.005000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4796 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563663761653036663736346362393534666561383737613132396563 Dec 16 12:58:51.005000 audit: BPF prog-id=225 op=UNLOAD Dec 16 12:58:51.005000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4796 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563663761653036663736346362393534666561383737613132396563 Dec 16 12:58:51.005000 audit: BPF prog-id=226 op=LOAD Dec 16 12:58:51.005000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4796 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.005000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563663761653036663736346362393534666561383737613132396563 Dec 16 12:58:51.005000 audit: BPF prog-id=227 op=LOAD Dec 16 12:58:51.005000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4796 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563663761653036663736346362393534666561383737613132396563 Dec 16 12:58:51.005000 audit: BPF prog-id=227 op=UNLOAD Dec 16 12:58:51.005000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4796 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563663761653036663736346362393534666561383737613132396563 Dec 16 12:58:51.005000 audit: BPF prog-id=226 op=UNLOAD Dec 16 12:58:51.005000 audit[4809]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4796 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563663761653036663736346362393534666561383737613132396563 Dec 16 12:58:51.005000 audit: BPF prog-id=228 op=LOAD Dec 16 12:58:51.005000 audit[4809]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4796 pid=4809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.005000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563663761653036663736346362393534666561383737613132396563 Dec 16 12:58:51.007623 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:51.034454 systemd-networkd[1515]: cali66e09184c2a: Link UP Dec 16 12:58:51.034708 systemd-networkd[1515]: cali66e09184c2a: Gained carrier Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.865 [INFO][4720] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0 calico-apiserver-7767f7c484- calico-apiserver d3d0f018-dbb1-4af4-8317-c55456bbf69e 876 0 2025-12-16 12:58:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7767f7c484 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7767f7c484-gsgtz eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali66e09184c2a [] [] }} ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.865 [INFO][4720] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.902 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" HandleID="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Workload="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.902 [INFO][4768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" HandleID="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Workload="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d6fd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7767f7c484-gsgtz", "timestamp":"2025-12-16 12:58:50.902034053 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.902 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.917 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.917 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:50.998 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.004 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.007 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.009 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.011 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.011 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.012 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.016 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.022 [INFO][4768] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.023 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" host="localhost" Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.023 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:51.054705 containerd[1605]: 2025-12-16 12:58:51.024 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" HandleID="k8s-pod-network.5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Workload="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" Dec 16 12:58:51.055880 containerd[1605]: 2025-12-16 12:58:51.031 [INFO][4720] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0", GenerateName:"calico-apiserver-7767f7c484-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3d0f018-dbb1-4af4-8317-c55456bbf69e", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7767f7c484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7767f7c484-gsgtz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66e09184c2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:51.055880 containerd[1605]: 2025-12-16 12:58:51.031 [INFO][4720] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" Dec 16 12:58:51.055880 containerd[1605]: 2025-12-16 12:58:51.031 [INFO][4720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66e09184c2a ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" Dec 16 12:58:51.055880 containerd[1605]: 2025-12-16 12:58:51.036 [INFO][4720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" Dec 16 12:58:51.055880 
containerd[1605]: 2025-12-16 12:58:51.036 [INFO][4720] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0", GenerateName:"calico-apiserver-7767f7c484-", Namespace:"calico-apiserver", SelfLink:"", UID:"d3d0f018-dbb1-4af4-8317-c55456bbf69e", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7767f7c484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c", Pod:"calico-apiserver-7767f7c484-gsgtz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66e09184c2a", MAC:"3e:f6:ae:88:63:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:51.055880 containerd[1605]: 2025-12-16 
12:58:51.050 [INFO][4720] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-gsgtz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--gsgtz-eth0" Dec 16 12:58:51.058786 containerd[1605]: time="2025-12-16T12:58:51.058752832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fbcbddcb-x6lxz,Uid:1c71642e-2c37-4a5d-aec0-8a0d6c89217c,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cf7ae06f764cb954fea877a129ecf0f401c1bf8e2df20911ffafffdbfbe5a5b\"" Dec 16 12:58:51.060809 containerd[1605]: time="2025-12-16T12:58:51.060162407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:58:51.068000 audit[4842]: NETFILTER_CFG table=filter:133 family=2 entries=53 op=nft_register_chain pid=4842 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:51.068000 audit[4842]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7fff68b47370 a2=0 a3=7fff68b4735c items=0 ppid=4329 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:51.077074 containerd[1605]: time="2025-12-16T12:58:51.077023586Z" level=info msg="connecting to shim 5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c" address="unix:///run/containerd/s/10de2997840c786fb2a064882521c1ca58499ab1cbb2b89861da7a7ab7a65829" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:51.099121 systemd-networkd[1515]: cali25556e939d1: Gained IPv6LL Dec 16 12:58:51.109087 systemd[1]: Started 
cri-containerd-5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c.scope - libcontainer container 5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c. Dec 16 12:58:51.125000 audit: BPF prog-id=229 op=LOAD Dec 16 12:58:51.126000 audit: BPF prog-id=230 op=LOAD Dec 16 12:58:51.126000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4851 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336664663965363361316331396331626263373665383239643035 Dec 16 12:58:51.126000 audit: BPF prog-id=230 op=UNLOAD Dec 16 12:58:51.126000 audit[4863]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336664663965363361316331396331626263373665383239643035 Dec 16 12:58:51.126000 audit: BPF prog-id=231 op=LOAD Dec 16 12:58:51.126000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4851 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.126000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336664663965363361316331396331626263373665383239643035 Dec 16 12:58:51.126000 audit: BPF prog-id=232 op=LOAD Dec 16 12:58:51.126000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4851 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336664663965363361316331396331626263373665383239643035 Dec 16 12:58:51.126000 audit: BPF prog-id=232 op=UNLOAD Dec 16 12:58:51.126000 audit[4863]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336664663965363361316331396331626263373665383239643035 Dec 16 12:58:51.126000 audit: BPF prog-id=231 op=UNLOAD Dec 16 12:58:51.126000 audit[4863]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 12:58:51.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336664663965363361316331396331626263373665383239643035 Dec 16 12:58:51.126000 audit: BPF prog-id=233 op=LOAD Dec 16 12:58:51.126000 audit[4863]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4851 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565336664663965363361316331396331626263373665383239643035 Dec 16 12:58:51.128883 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:51.137486 systemd-networkd[1515]: calib83695efda4: Link UP Dec 16 12:58:51.138455 systemd-networkd[1515]: calib83695efda4: Gained carrier Dec 16 12:58:51.157020 kubelet[2806]: E1216 12:58:51.156664 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:51.157020 kubelet[2806]: E1216 12:58:51.156951 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" podUID="ae64ff47-24ee-417f-a174-8a680294cf45" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:50.861 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--r5stm-eth0 goldmane-666569f655- calico-system e710a919-c171-452f-a8e0-220cab9661a8 872 0 2025-12-16 12:58:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-r5stm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib83695efda4 [] [] }} ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:50.861 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-eth0" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:50.902 [INFO][4761] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" HandleID="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Workload="localhost-k8s-goldmane--666569f655--r5stm-eth0" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:50.902 [INFO][4761] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" 
HandleID="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Workload="localhost-k8s-goldmane--666569f655--r5stm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b9630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-r5stm", "timestamp":"2025-12-16 12:58:50.90234154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:50.902 [INFO][4761] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.024 [INFO][4761] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.024 [INFO][4761] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.100 [INFO][4761] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.105 [INFO][4761] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.112 [INFO][4761] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.117 [INFO][4761] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.119 [INFO][4761] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.119 
[INFO][4761] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.120 [INFO][4761] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651 Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.123 [INFO][4761] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.130 [INFO][4761] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.130 [INFO][4761] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" host="localhost" Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.130 [INFO][4761] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:51.158337 containerd[1605]: 2025-12-16 12:58:51.130 [INFO][4761] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" HandleID="k8s-pod-network.b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Workload="localhost-k8s-goldmane--666569f655--r5stm-eth0" Dec 16 12:58:51.159354 containerd[1605]: 2025-12-16 12:58:51.133 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r5stm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e710a919-c171-452f-a8e0-220cab9661a8", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-r5stm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib83695efda4", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:51.159354 containerd[1605]: 2025-12-16 12:58:51.134 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-eth0" Dec 16 12:58:51.159354 containerd[1605]: 2025-12-16 12:58:51.134 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib83695efda4 ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-eth0" Dec 16 12:58:51.159354 containerd[1605]: 2025-12-16 12:58:51.139 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-eth0" Dec 16 12:58:51.159354 containerd[1605]: 2025-12-16 12:58:51.139 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--r5stm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e710a919-c171-452f-a8e0-220cab9661a8", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 24, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651", Pod:"goldmane-666569f655-r5stm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib83695efda4", MAC:"a6:61:c2:25:b6:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:51.159354 containerd[1605]: 2025-12-16 12:58:51.149 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" Namespace="calico-system" Pod="goldmane-666569f655-r5stm" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--r5stm-eth0" Dec 16 12:58:51.180152 containerd[1605]: time="2025-12-16T12:58:51.180108627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7767f7c484-gsgtz,Uid:d3d0f018-dbb1-4af4-8317-c55456bbf69e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5e3fdf9e63a1c19c1bbc76e829d0515a833c3b2a567bf60a1d99ec55b373fb6c\"" Dec 16 12:58:51.188000 audit[4899]: NETFILTER_CFG table=filter:134 family=2 entries=64 op=nft_register_chain pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:51.188000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=31120 a0=3 a1=7ffc6cebccf0 
a2=0 a3=7ffc6cebccdc items=0 ppid=4329 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.188000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:51.192000 audit[4900]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:51.192000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff90f18a90 a2=0 a3=7fff90f18a7c items=0 ppid=2937 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:51.199206 containerd[1605]: time="2025-12-16T12:58:51.199153925Z" level=info msg="connecting to shim b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651" address="unix:///run/containerd/s/1da314e2bd109401a0113c689062bcc2493381426576f97651c15fc307e4c4f1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:51.199000 audit[4900]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=4900 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:51.199000 audit[4900]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff90f18a90 a2=0 a3=7fff90f18a7c items=0 ppid=2937 pid=4900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:51.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:51.231004 systemd[1]: Started cri-containerd-b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651.scope - libcontainer container b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651. Dec 16 12:58:51.242000 audit: BPF prog-id=234 op=LOAD Dec 16 12:58:51.243000 audit: BPF prog-id=235 op=LOAD Dec 16 12:58:51.243000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=4909 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234303136366663633633663432663666636364363066663231386564 Dec 16 12:58:51.243000 audit: BPF prog-id=235 op=UNLOAD Dec 16 12:58:51.243000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4909 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234303136366663633633663432663666636364363066663231386564 Dec 16 12:58:51.243000 audit: BPF prog-id=236 op=LOAD Dec 16 12:58:51.243000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=4909 pid=4921 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234303136366663633633663432663666636364363066663231386564 Dec 16 12:58:51.243000 audit: BPF prog-id=237 op=LOAD Dec 16 12:58:51.243000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=4909 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234303136366663633633663432663666636364363066663231386564 Dec 16 12:58:51.243000 audit: BPF prog-id=237 op=UNLOAD Dec 16 12:58:51.243000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4909 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234303136366663633633663432663666636364363066663231386564 Dec 16 12:58:51.243000 audit: BPF prog-id=236 op=UNLOAD Dec 16 12:58:51.243000 audit[4921]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=4909 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234303136366663633633663432663666636364363066663231386564 Dec 16 12:58:51.243000 audit: BPF prog-id=238 op=LOAD Dec 16 12:58:51.243000 audit[4921]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=4909 pid=4921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234303136366663633633663432663666636364363066663231386564 Dec 16 12:58:51.258936 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:51.292485 containerd[1605]: time="2025-12-16T12:58:51.292439568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-r5stm,Uid:e710a919-c171-452f-a8e0-220cab9661a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b40166fcc63f42f6fccd60ff218ed21ec1b22933870751b75ff2727877d05651\"" Dec 16 12:58:51.420724 containerd[1605]: time="2025-12-16T12:58:51.420575076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:51.520613 containerd[1605]: time="2025-12-16T12:58:51.520532248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:58:51.520613 containerd[1605]: time="2025-12-16T12:58:51.520589014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:51.520920 kubelet[2806]: E1216 12:58:51.520866 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:58:51.520978 kubelet[2806]: E1216 12:58:51.520932 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:58:51.521232 kubelet[2806]: E1216 12:58:51.521151 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9mcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fbcbddcb-x6lxz_calico-system(1c71642e-2c37-4a5d-aec0-8a0d6c89217c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:51.521388 containerd[1605]: time="2025-12-16T12:58:51.521275713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:58:51.522436 kubelet[2806]: E1216 12:58:51.522402 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" podUID="1c71642e-2c37-4a5d-aec0-8a0d6c89217c" Dec 16 12:58:51.737999 systemd-networkd[1515]: cali3c47d9a8439: Gained IPv6LL Dec 16 12:58:51.796689 kubelet[2806]: E1216 12:58:51.796626 2806 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:51.797403 containerd[1605]: time="2025-12-16T12:58:51.797302879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwmpg,Uid:f185fc0f-162e-4772-849a-712bab097239,Namespace:kube-system,Attempt:0,}" Dec 16 12:58:51.797776 containerd[1605]: time="2025-12-16T12:58:51.797746691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7767f7c484-w77xw,Uid:e4c04278-05a3-4964-9b08-f5b05bcddf6d,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:58:51.854777 containerd[1605]: time="2025-12-16T12:58:51.854718966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:51.857683 containerd[1605]: time="2025-12-16T12:58:51.857630489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:58:51.858253 kubelet[2806]: E1216 12:58:51.858195 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:51.858315 kubelet[2806]: E1216 12:58:51.858263 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:51.858630 kubelet[2806]: E1216 12:58:51.858573 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wv5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7767f7c484-gsgtz_calico-apiserver(d3d0f018-dbb1-4af4-8317-c55456bbf69e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:51.860101 kubelet[2806]: E1216 12:58:51.860040 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e" Dec 16 12:58:51.868881 containerd[1605]: time="2025-12-16T12:58:51.857925012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:51.869083 containerd[1605]: time="2025-12-16T12:58:51.858587274Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:58:51.914928 systemd-networkd[1515]: cali6ec95e8b67b: Link UP Dec 16 12:58:51.916020 systemd-networkd[1515]: cali6ec95e8b67b: Gained carrier Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.840 [INFO][4948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0 coredns-674b8bbfcf- kube-system f185fc0f-162e-4772-849a-712bab097239 871 0 2025-12-16 12:58:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-wwmpg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ec95e8b67b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.840 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.875 [INFO][4977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" HandleID="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Workload="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.876 [INFO][4977] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" HandleID="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Workload="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-wwmpg", "timestamp":"2025-12-16 12:58:51.875904329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.876 [INFO][4977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.876 [INFO][4977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.876 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.882 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.888 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.891 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.893 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.895 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.895 [INFO][4977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.896 [INFO][4977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.901 [INFO][4977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.907 [INFO][4977] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.907 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" host="localhost" Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.907 [INFO][4977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:58:51.930887 containerd[1605]: 2025-12-16 12:58:51.907 [INFO][4977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" HandleID="k8s-pod-network.6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Workload="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" Dec 16 12:58:51.931624 containerd[1605]: 2025-12-16 12:58:51.911 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f185fc0f-162e-4772-849a-712bab097239", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-wwmpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ec95e8b67b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:51.931624 containerd[1605]: 2025-12-16 12:58:51.911 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" Dec 16 12:58:51.931624 containerd[1605]: 2025-12-16 12:58:51.911 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ec95e8b67b ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" Dec 16 12:58:51.931624 containerd[1605]: 2025-12-16 12:58:51.916 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" Dec 16 12:58:51.931624 containerd[1605]: 2025-12-16 12:58:51.917 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f185fc0f-162e-4772-849a-712bab097239", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d", Pod:"coredns-674b8bbfcf-wwmpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ec95e8b67b", MAC:"62:21:74:58:33:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:51.931624 containerd[1605]: 2025-12-16 12:58:51.926 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" Namespace="kube-system" Pod="coredns-674b8bbfcf-wwmpg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--wwmpg-eth0" Dec 16 12:58:51.947000 audit[5003]: NETFILTER_CFG table=filter:137 family=2 entries=62 op=nft_register_chain pid=5003 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:51.947000 audit[5003]: SYSCALL arch=c000003e syscall=46 success=yes exit=27948 a0=3 a1=7ffc5251d370 a2=0 a3=7ffc5251d35c items=0 ppid=4329 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:51.947000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:51.959493 containerd[1605]: time="2025-12-16T12:58:51.958946682Z" level=info msg="connecting to shim 6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d" address="unix:///run/containerd/s/f5d65df1ff56a0b4062881f2f2e951cf006b780e3a4649024710675f094934b4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:51.991094 systemd[1]: Started cri-containerd-6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d.scope - libcontainer container 6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d. 
Dec 16 12:58:52.009000 audit: BPF prog-id=239 op=LOAD Dec 16 12:58:52.009000 audit: BPF prog-id=240 op=LOAD Dec 16 12:58:52.009000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5013 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.009000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653262316661326262343339383933363937366665366230623430 Dec 16 12:58:52.010000 audit: BPF prog-id=240 op=UNLOAD Dec 16 12:58:52.010000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.010000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653262316661326262343339383933363937366665366230623430 Dec 16 12:58:52.011000 audit: BPF prog-id=241 op=LOAD Dec 16 12:58:52.011000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5013 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.011000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653262316661326262343339383933363937366665366230623430 Dec 16 12:58:52.011000 audit: BPF prog-id=242 op=LOAD Dec 16 12:58:52.011000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5013 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653262316661326262343339383933363937366665366230623430 Dec 16 12:58:52.011000 audit: BPF prog-id=242 op=UNLOAD Dec 16 12:58:52.011000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.011000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653262316661326262343339383933363937366665366230623430 Dec 16 12:58:52.012000 audit: BPF prog-id=241 op=UNLOAD Dec 16 12:58:52.012000 audit[5024]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:52.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653262316661326262343339383933363937366665366230623430 Dec 16 12:58:52.012000 audit: BPF prog-id=243 op=LOAD Dec 16 12:58:52.012000 audit[5024]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5013 pid=5024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.012000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661653262316661326262343339383933363937366665366230623430 Dec 16 12:58:52.014486 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:52.020016 systemd-networkd[1515]: calic5d9af821bf: Link UP Dec 16 12:58:52.021129 systemd-networkd[1515]: calic5d9af821bf: Gained carrier Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.840 [INFO][4950] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0 calico-apiserver-7767f7c484- calico-apiserver e4c04278-05a3-4964-9b08-f5b05bcddf6d 875 0 2025-12-16 12:58:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7767f7c484 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7767f7c484-w77xw eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calic5d9af821bf [] [] }} ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.840 [INFO][4950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.881 [INFO][4979] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" HandleID="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Workload="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.882 [INFO][4979] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" HandleID="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Workload="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7767f7c484-w77xw", "timestamp":"2025-12-16 12:58:51.881618801 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.882 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM 
lock. Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.908 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.908 [INFO][4979] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.982 [INFO][4979] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.988 [INFO][4979] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.993 [INFO][4979] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.996 [INFO][4979] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.999 [INFO][4979] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:51.999 [INFO][4979] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:52.000 [INFO][4979] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5 Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:52.004 [INFO][4979] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:52.011 [INFO][4979] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:52.012 [INFO][4979] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" host="localhost" Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:52.012 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:58:52.037317 containerd[1605]: 2025-12-16 12:58:52.012 [INFO][4979] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" HandleID="k8s-pod-network.1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Workload="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" Dec 16 12:58:52.037940 containerd[1605]: 2025-12-16 12:58:52.016 [INFO][4950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0", GenerateName:"calico-apiserver-7767f7c484-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4c04278-05a3-4964-9b08-f5b05bcddf6d", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7767f7c484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7767f7c484-w77xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5d9af821bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:52.037940 containerd[1605]: 2025-12-16 12:58:52.016 [INFO][4950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" Dec 16 12:58:52.037940 containerd[1605]: 2025-12-16 12:58:52.016 [INFO][4950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5d9af821bf ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" Dec 16 12:58:52.037940 containerd[1605]: 2025-12-16 12:58:52.021 [INFO][4950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" Dec 16 12:58:52.037940 
containerd[1605]: 2025-12-16 12:58:52.022 [INFO][4950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0", GenerateName:"calico-apiserver-7767f7c484-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4c04278-05a3-4964-9b08-f5b05bcddf6d", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 58, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7767f7c484", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5", Pod:"calico-apiserver-7767f7c484-w77xw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic5d9af821bf", MAC:"1a:b9:17:b0:8a:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:58:52.037940 containerd[1605]: 2025-12-16 
12:58:52.031 [INFO][4950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" Namespace="calico-apiserver" Pod="calico-apiserver-7767f7c484-w77xw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7767f7c484--w77xw-eth0" Dec 16 12:58:52.053744 containerd[1605]: time="2025-12-16T12:58:52.053687012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wwmpg,Uid:f185fc0f-162e-4772-849a-712bab097239,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d\"" Dec 16 12:58:52.054870 kubelet[2806]: E1216 12:58:52.054525 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:52.054000 audit[5058]: NETFILTER_CFG table=filter:138 family=2 entries=61 op=nft_register_chain pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 12:58:52.054000 audit[5058]: SYSCALL arch=c000003e syscall=46 success=yes exit=29000 a0=3 a1=7ffeaf3ec570 a2=0 a3=7ffeaf3ec55c items=0 ppid=4329 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.054000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 12:58:52.071634 containerd[1605]: time="2025-12-16T12:58:52.071574456Z" level=info msg="connecting to shim 1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5" address="unix:///run/containerd/s/34faf2cb77436eae0a5a22cc729d61b6ff194fd10949a2cfbf7904de5384e9d8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:58:52.073855 containerd[1605]: 
time="2025-12-16T12:58:52.073815160Z" level=info msg="CreateContainer within sandbox \"6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:58:52.087142 containerd[1605]: time="2025-12-16T12:58:52.086608455Z" level=info msg="Container 0cb1714c3798882c847ab9cfdd7e8b38fb66e886151370476639eec069deba40: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:58:52.093843 containerd[1605]: time="2025-12-16T12:58:52.093778538Z" level=info msg="CreateContainer within sandbox \"6ae2b1fa2bb4398936976fe6b0b407e405fa617b93cf88e227dc82f8ca13980d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cb1714c3798882c847ab9cfdd7e8b38fb66e886151370476639eec069deba40\"" Dec 16 12:58:52.094298 containerd[1605]: time="2025-12-16T12:58:52.094265351Z" level=info msg="StartContainer for \"0cb1714c3798882c847ab9cfdd7e8b38fb66e886151370476639eec069deba40\"" Dec 16 12:58:52.095056 containerd[1605]: time="2025-12-16T12:58:52.095027892Z" level=info msg="connecting to shim 0cb1714c3798882c847ab9cfdd7e8b38fb66e886151370476639eec069deba40" address="unix:///run/containerd/s/f5d65df1ff56a0b4062881f2f2e951cf006b780e3a4649024710675f094934b4" protocol=ttrpc version=3 Dec 16 12:58:52.104135 systemd[1]: Started cri-containerd-1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5.scope - libcontainer container 1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5. Dec 16 12:58:52.113570 systemd[1]: Started cri-containerd-0cb1714c3798882c847ab9cfdd7e8b38fb66e886151370476639eec069deba40.scope - libcontainer container 0cb1714c3798882c847ab9cfdd7e8b38fb66e886151370476639eec069deba40. 
Dec 16 12:58:52.119000 audit: BPF prog-id=244 op=LOAD Dec 16 12:58:52.119000 audit: BPF prog-id=245 op=LOAD Dec 16 12:58:52.119000 audit[5079]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=5068 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393266666661373530656435363639316361336231303831363635 Dec 16 12:58:52.119000 audit: BPF prog-id=245 op=UNLOAD Dec 16 12:58:52.119000 audit[5079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5068 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393266666661373530656435363639316361336231303831363635 Dec 16 12:58:52.119000 audit: BPF prog-id=246 op=LOAD Dec 16 12:58:52.119000 audit[5079]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=5068 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.119000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393266666661373530656435363639316361336231303831363635 Dec 16 12:58:52.120000 audit: BPF prog-id=247 op=LOAD Dec 16 12:58:52.120000 audit[5079]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=5068 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393266666661373530656435363639316361336231303831363635 Dec 16 12:58:52.120000 audit: BPF prog-id=247 op=UNLOAD Dec 16 12:58:52.120000 audit[5079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5068 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393266666661373530656435363639316361336231303831363635 Dec 16 12:58:52.120000 audit: BPF prog-id=246 op=UNLOAD Dec 16 12:58:52.120000 audit[5079]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5068 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
12:58:52.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393266666661373530656435363639316361336231303831363635 Dec 16 12:58:52.120000 audit: BPF prog-id=248 op=LOAD Dec 16 12:58:52.120000 audit[5079]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=5068 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166393266666661373530656435363639316361336231303831363635 Dec 16 12:58:52.122289 systemd-resolved[1370]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 12:58:52.128000 audit: BPF prog-id=249 op=LOAD Dec 16 12:58:52.129000 audit: BPF prog-id=250 op=LOAD Dec 16 12:58:52.129000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178238 a2=98 a3=0 items=0 ppid=5013 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063623137313463333739383838326338343761623963666464376538 Dec 16 12:58:52.129000 audit: BPF prog-id=250 op=UNLOAD Dec 16 12:58:52.129000 audit[5090]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063623137313463333739383838326338343761623963666464376538 Dec 16 12:58:52.129000 audit: BPF prog-id=251 op=LOAD Dec 16 12:58:52.129000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000178488 a2=98 a3=0 items=0 ppid=5013 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063623137313463333739383838326338343761623963666464376538 Dec 16 12:58:52.129000 audit: BPF prog-id=252 op=LOAD Dec 16 12:58:52.129000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000178218 a2=98 a3=0 items=0 ppid=5013 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063623137313463333739383838326338343761623963666464376538 Dec 16 12:58:52.129000 audit: BPF prog-id=252 op=UNLOAD Dec 16 12:58:52.129000 
audit[5090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063623137313463333739383838326338343761623963666464376538 Dec 16 12:58:52.129000 audit: BPF prog-id=251 op=UNLOAD Dec 16 12:58:52.129000 audit[5090]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5013 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063623137313463333739383838326338343761623963666464376538 Dec 16 12:58:52.129000 audit: BPF prog-id=253 op=LOAD Dec 16 12:58:52.129000 audit[5090]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001786e8 a2=98 a3=0 items=0 ppid=5013 pid=5090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063623137313463333739383838326338343761623963666464376538 Dec 16 12:58:52.151232 containerd[1605]: 
time="2025-12-16T12:58:52.151186006Z" level=info msg="StartContainer for \"0cb1714c3798882c847ab9cfdd7e8b38fb66e886151370476639eec069deba40\" returns successfully" Dec 16 12:58:52.159372 kubelet[2806]: E1216 12:58:52.159343 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:52.164989 kubelet[2806]: E1216 12:58:52.164484 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:52.173053 kubelet[2806]: E1216 12:58:52.172924 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e" Dec 16 12:58:52.173053 kubelet[2806]: E1216 12:58:52.173010 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" podUID="1c71642e-2c37-4a5d-aec0-8a0d6c89217c" Dec 16 12:58:52.179143 containerd[1605]: time="2025-12-16T12:58:52.179077230Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7767f7c484-w77xw,Uid:e4c04278-05a3-4964-9b08-f5b05bcddf6d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1f92fffa750ed56691ca3b1081665b52675b204b2e8838758bcfefdc6a3aeab5\"" Dec 16 12:58:52.189812 kubelet[2806]: I1216 12:58:52.189680 2806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wwmpg" podStartSLOduration=44.189666779 podStartE2EDuration="44.189666779s" podCreationTimestamp="2025-12-16 12:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:58:52.188103937 +0000 UTC m=+48.481242421" watchObservedRunningTime="2025-12-16 12:58:52.189666779 +0000 UTC m=+48.482805263" Dec 16 12:58:52.199000 audit[5140]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:52.199000 audit[5140]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdf84140b0 a2=0 a3=7ffdf841409c items=0 ppid=2937 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:52.206000 audit[5140]: NETFILTER_CFG table=nat:140 family=2 entries=44 op=nft_register_rule pid=5140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:52.206000 audit[5140]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdf84140b0 a2=0 a3=7ffdf841409c items=0 ppid=2937 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:52.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:52.226182 containerd[1605]: time="2025-12-16T12:58:52.226110861Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:58:52.227295 containerd[1605]: time="2025-12-16T12:58:52.227200165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:58:52.227364 containerd[1605]: time="2025-12-16T12:58:52.227246492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:52.227584 kubelet[2806]: E1216 12:58:52.227538 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:58:52.227706 kubelet[2806]: E1216 12:58:52.227588 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:58:52.227947 kubelet[2806]: E1216 12:58:52.227807 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vggkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5stm_calico-system(e710a919-c171-452f-a8e0-220cab9661a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:52.228212 containerd[1605]: time="2025-12-16T12:58:52.228182138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:58:52.229453 kubelet[2806]: E1216 12:58:52.229406 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5stm" podUID="e710a919-c171-452f-a8e0-220cab9661a8" Dec 16 12:58:52.506994 systemd-networkd[1515]: calib83695efda4: Gained IPv6LL Dec 16 12:58:52.563474 containerd[1605]: time="2025-12-16T12:58:52.563402383Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Dec 16 12:58:52.564643 containerd[1605]: time="2025-12-16T12:58:52.564596834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:58:52.564643 containerd[1605]: time="2025-12-16T12:58:52.564631589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:58:52.564888 kubelet[2806]: E1216 12:58:52.564815 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:52.564946 kubelet[2806]: E1216 12:58:52.564892 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:58:52.567099 kubelet[2806]: E1216 12:58:52.567040 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9wsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7767f7c484-w77xw_calico-apiserver(e4c04278-05a3-4964-9b08-f5b05bcddf6d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:58:52.571612 kubelet[2806]: E1216 12:58:52.571571 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" podUID="e4c04278-05a3-4964-9b08-f5b05bcddf6d" Dec 16 12:58:52.826037 systemd-networkd[1515]: cali10983caa5d2: Gained IPv6LL Dec 16 12:58:52.954095 systemd-networkd[1515]: cali66e09184c2a: Gained IPv6LL Dec 16 12:58:53.177091 kubelet[2806]: E1216 12:58:53.176087 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:53.177585 kubelet[2806]: E1216 12:58:53.176885 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5stm" podUID="e710a919-c171-452f-a8e0-220cab9661a8" Dec 16 12:58:53.177585 kubelet[2806]: E1216 12:58:53.176940 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" podUID="e4c04278-05a3-4964-9b08-f5b05bcddf6d" Dec 16 12:58:53.177585 kubelet[2806]: E1216 12:58:53.176985 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e" Dec 16 12:58:53.236000 audit[5148]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:53.239925 kernel: kauditd_printk_skb: 467 callbacks suppressed Dec 16 12:58:53.239979 kernel: audit: type=1325 audit(1765889933.236:746): table=filter:141 family=2 entries=14 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:53.236000 audit[5148]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3641e2c0 a2=0 a3=7ffd3641e2ac items=0 ppid=2937 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:53.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:53.253196 kernel: audit: type=1300 audit(1765889933.236:746): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd3641e2c0 a2=0 a3=7ffd3641e2ac items=0 ppid=2937 pid=5148 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:53.253301 kernel: audit: type=1327 audit(1765889933.236:746): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:53.244000 audit[5148]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:53.256401 kernel: audit: type=1325 audit(1765889933.244:747): table=nat:142 family=2 entries=20 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:53.257403 kernel: audit: type=1300 audit(1765889933.244:747): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd3641e2c0 a2=0 a3=7ffd3641e2ac items=0 ppid=2937 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:53.244000 audit[5148]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd3641e2c0 a2=0 a3=7ffd3641e2ac items=0 ppid=2937 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:53.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:53.269871 kernel: audit: type=1327 audit(1765889933.244:747): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:53.338032 systemd-networkd[1515]: calic5d9af821bf: Gained IPv6LL Dec 16 12:58:53.786003 systemd-networkd[1515]: cali6ec95e8b67b: Gained IPv6LL Dec 16 
12:58:54.173943 kubelet[2806]: E1216 12:58:54.173526 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:54.173943 kubelet[2806]: E1216 12:58:54.173706 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" podUID="e4c04278-05a3-4964-9b08-f5b05bcddf6d" Dec 16 12:58:54.290000 audit[5152]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:54.290000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff6fb6a680 a2=0 a3=7fff6fb6a66c items=0 ppid=2937 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:54.303031 kernel: audit: type=1325 audit(1765889934.290:748): table=filter:143 family=2 entries=14 op=nft_register_rule pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:54.303095 kernel: audit: type=1300 audit(1765889934.290:748): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff6fb6a680 a2=0 a3=7fff6fb6a66c items=0 ppid=2937 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:54.290000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:54.306304 kernel: audit: type=1327 audit(1765889934.290:748): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:54.312000 audit[5152]: NETFILTER_CFG table=nat:144 family=2 entries=56 op=nft_register_chain pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:54.312000 audit[5152]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff6fb6a680 a2=0 a3=7fff6fb6a66c items=0 ppid=2937 pid=5152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:54.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:58:54.317943 kernel: audit: type=1325 audit(1765889934.312:749): table=nat:144 family=2 entries=56 op=nft_register_chain pid=5152 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:58:55.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.102:22-10.0.0.1:36842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:55.094479 systemd[1]: Started sshd@9-10.0.0.102:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842). 
Dec 16 12:58:55.174000 audit[5155]: USER_ACCT pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.175748 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:58:55.175000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.176000 audit[5155]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff838ec8b0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:55.176000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:58:55.177479 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:55.182503 systemd-logind[1584]: New session 10 of user core. Dec 16 12:58:55.194119 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 12:58:55.195000 audit[5155]: USER_START pid=5155 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.197000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.291450 sshd[5158]: Connection closed by 10.0.0.1 port 36842 Dec 16 12:58:55.291739 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:55.292000 audit[5155]: USER_END pid=5155 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.292000 audit[5155]: CRED_DISP pid=5155 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.300633 systemd[1]: sshd@9-10.0.0.102:22-10.0.0.1:36842.service: Deactivated successfully. Dec 16 12:58:55.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.102:22-10.0.0.1:36842 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:55.302633 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:58:55.303430 systemd-logind[1584]: Session 10 logged out. Waiting for processes to exit. 
Dec 16 12:58:55.306467 systemd[1]: Started sshd@10-10.0.0.102:22-10.0.0.1:36856.service - OpenSSH per-connection server daemon (10.0.0.1:36856). Dec 16 12:58:55.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.102:22-10.0.0.1:36856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:55.307325 systemd-logind[1584]: Removed session 10. Dec 16 12:58:55.360000 audit[5172]: USER_ACCT pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.361218 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 36856 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:58:55.361000 audit[5172]: CRED_ACQ pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.361000 audit[5172]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdfe02f430 a2=3 a3=0 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:55.361000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:58:55.362611 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:55.367803 systemd-logind[1584]: New session 11 of user core. Dec 16 12:58:55.374986 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:58:55.376000 audit[5172]: USER_START pid=5172 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.378000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.654340 sshd[5175]: Connection closed by 10.0.0.1 port 36856 Dec 16 12:58:55.654562 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:55.656000 audit[5172]: USER_END pid=5172 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.656000 audit[5172]: CRED_DISP pid=5172 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.666788 systemd[1]: sshd@10-10.0.0.102:22-10.0.0.1:36856.service: Deactivated successfully. Dec 16 12:58:55.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.102:22-10.0.0.1:36856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:55.670327 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:58:55.673093 systemd-logind[1584]: Session 11 logged out. Waiting for processes to exit. 
Dec 16 12:58:55.681725 systemd[1]: Started sshd@11-10.0.0.102:22-10.0.0.1:36860.service - OpenSSH per-connection server daemon (10.0.0.1:36860). Dec 16 12:58:55.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.102:22-10.0.0.1:36860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:55.683801 systemd-logind[1584]: Removed session 11. Dec 16 12:58:55.739000 audit[5188]: USER_ACCT pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.741013 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 36860 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:58:55.741000 audit[5188]: CRED_ACQ pid=5188 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.741000 audit[5188]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffefab7240 a2=3 a3=0 items=0 ppid=1 pid=5188 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:58:55.741000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:58:55.742772 sshd-session[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:58:55.748037 systemd-logind[1584]: New session 12 of user core. Dec 16 12:58:55.761074 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 16 12:58:55.762000 audit[5188]: USER_START pid=5188 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.765000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.845343 sshd[5192]: Connection closed by 10.0.0.1 port 36860 Dec 16 12:58:55.845674 sshd-session[5188]: pam_unix(sshd:session): session closed for user core Dec 16 12:58:55.846000 audit[5188]: USER_END pid=5188 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.846000 audit[5188]: CRED_DISP pid=5188 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:58:55.851168 systemd[1]: sshd@11-10.0.0.102:22-10.0.0.1:36860.service: Deactivated successfully. Dec 16 12:58:55.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.102:22-10.0.0.1:36860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:58:55.853800 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:58:55.854682 systemd-logind[1584]: Session 12 logged out. Waiting for processes to exit. 
Dec 16 12:58:55.856548 systemd-logind[1584]: Removed session 12. Dec 16 12:58:56.038038 kubelet[2806]: I1216 12:58:56.037960 2806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:58:56.039246 kubelet[2806]: E1216 12:58:56.039220 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:56.178023 kubelet[2806]: E1216 12:58:56.177932 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:58:59.801116 containerd[1605]: time="2025-12-16T12:58:59.801068650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:59:00.102909 containerd[1605]: time="2025-12-16T12:59:00.102679781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:00.104158 containerd[1605]: time="2025-12-16T12:59:00.104117442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:59:00.104237 containerd[1605]: time="2025-12-16T12:59:00.104175965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:00.104412 kubelet[2806]: E1216 12:59:00.104353 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:00.104794 kubelet[2806]: E1216 12:59:00.104416 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:00.104794 kubelet[2806]: E1216 12:59:00.104551 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e22e1d47bcea42178d38e488f88370a3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kgm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9b9b98c4-zxd2x_calico-system(0412d3e0-d5c8-47ca-9e38-144ba2ec1a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:00.106447 containerd[1605]: time="2025-12-16T12:59:00.106421955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:59:00.467761 containerd[1605]: time="2025-12-16T12:59:00.467593939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:00.468951 containerd[1605]: time="2025-12-16T12:59:00.468894136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:59:00.469090 containerd[1605]: time="2025-12-16T12:59:00.468966666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:00.469158 kubelet[2806]: E1216 12:59:00.469117 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:00.469251 kubelet[2806]: E1216 12:59:00.469170 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:00.469340 kubelet[2806]: E1216 12:59:00.469292 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kgm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9b9b98c4-zxd2x_calico-system(0412d3e0-d5c8-47ca-9e38-144ba2ec1a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:00.470541 kubelet[2806]: E1216 12:59:00.470495 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9b9b98c4-zxd2x" podUID="0412d3e0-d5c8-47ca-9e38-144ba2ec1a92" Dec 16 12:59:00.796925 containerd[1605]: time="2025-12-16T12:59:00.796734518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:59:00.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.102:22-10.0.0.1:36288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:00.861756 systemd[1]: Started sshd@12-10.0.0.102:22-10.0.0.1:36288.service - OpenSSH per-connection server daemon (10.0.0.1:36288). Dec 16 12:59:00.863258 kernel: kauditd_printk_skb: 35 callbacks suppressed Dec 16 12:59:00.863327 kernel: audit: type=1130 audit(1765889940.861:777): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.102:22-10.0.0.1:36288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:00.922000 audit[5265]: USER_ACCT pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:00.926244 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:00.926811 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 36288 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:00.924000 audit[5265]: CRED_ACQ pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:00.934663 kernel: audit: type=1101 audit(1765889940.922:778): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:00.934718 kernel: audit: type=1103 audit(1765889940.924:779): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:00.934864 kernel: audit: type=1006 audit(1765889940.924:780): pid=5265 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 12:59:00.934882 systemd-logind[1584]: New session 13 of user core. 
Dec 16 12:59:00.937865 kernel: audit: type=1300 audit(1765889940.924:780): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff9e26730 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:00.924000 audit[5265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff9e26730 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:00.924000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:00.945405 kernel: audit: type=1327 audit(1765889940.924:780): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:00.945999 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 16 12:59:00.947000 audit[5265]: USER_START pid=5265 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:00.949000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:00.959805 kernel: audit: type=1105 audit(1765889940.947:781): pid=5265 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:00.959890 kernel: audit: type=1103 audit(1765889940.949:782): pid=5268 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:01.016703 sshd[5268]: Connection closed by 10.0.0.1 port 36288 Dec 16 12:59:01.017020 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:01.017000 audit[5265]: USER_END pid=5265 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:01.022189 systemd[1]: sshd@12-10.0.0.102:22-10.0.0.1:36288.service: Deactivated successfully. 
Dec 16 12:59:01.017000 audit[5265]: CRED_DISP pid=5265 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:01.025229 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:59:01.026292 systemd-logind[1584]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:59:01.027575 systemd-logind[1584]: Removed session 13. Dec 16 12:59:01.029966 kernel: audit: type=1106 audit(1765889941.017:783): pid=5265 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:01.030012 kernel: audit: type=1104 audit(1765889941.017:784): pid=5265 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:01.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.102:22-10.0.0.1:36288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:01.139386 containerd[1605]: time="2025-12-16T12:59:01.139244294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:01.351661 containerd[1605]: time="2025-12-16T12:59:01.351563362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:59:01.351880 containerd[1605]: time="2025-12-16T12:59:01.351619560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:01.351907 kubelet[2806]: E1216 12:59:01.351875 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:01.352195 kubelet[2806]: E1216 12:59:01.351917 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:01.352195 kubelet[2806]: E1216 12:59:01.352023 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zw4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:59:01.353750 containerd[1605]: time="2025-12-16T12:59:01.353720748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:59:01.876195 containerd[1605]: time="2025-12-16T12:59:01.876122276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:01.877403 containerd[1605]: time="2025-12-16T12:59:01.877354761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:59:01.877540 containerd[1605]: time="2025-12-16T12:59:01.877451487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:01.877680 kubelet[2806]: E1216 12:59:01.877620 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:01.877731 kubelet[2806]: E1216 12:59:01.877686 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:01.877933 kubelet[2806]: E1216 12:59:01.877873 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zw4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:01.879079 kubelet[2806]: E1216 12:59:01.879024 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:59:02.798186 containerd[1605]: time="2025-12-16T12:59:02.798133716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:03.102691 containerd[1605]: time="2025-12-16T12:59:03.102512240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:03.103952 containerd[1605]: time="2025-12-16T12:59:03.103905950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:03.103952 containerd[1605]: time="2025-12-16T12:59:03.103940156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:03.104191 kubelet[2806]: E1216 12:59:03.104133 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:03.104191 kubelet[2806]: E1216 12:59:03.104186 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:03.104627 kubelet[2806]: E1216 12:59:03.104342 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c7fb6bf4b-pj7kl_calico-apiserver(ae64ff47-24ee-417f-a174-8a680294cf45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:03.105537 kubelet[2806]: E1216 12:59:03.105509 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" podUID="ae64ff47-24ee-417f-a174-8a680294cf45" Dec 16 12:59:05.798171 containerd[1605]: time="2025-12-16T12:59:05.797957169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:06.031518 systemd[1]: Started sshd@13-10.0.0.102:22-10.0.0.1:36298.service - OpenSSH per-connection server daemon (10.0.0.1:36298). 
Dec 16 12:59:06.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.102:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:06.036887 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:06.037014 kernel: audit: type=1130 audit(1765889946.030:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.102:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:06.090000 audit[5290]: USER_ACCT pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.092003 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 36298 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:06.093543 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:06.091000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.098630 systemd-logind[1584]: New session 14 of user core. 
Dec 16 12:59:06.103204 kernel: audit: type=1101 audit(1765889946.090:787): pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.103311 kernel: audit: type=1103 audit(1765889946.091:788): pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.103340 kernel: audit: type=1006 audit(1765889946.091:789): pid=5290 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 12:59:06.106433 kernel: audit: type=1300 audit(1765889946.091:789): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc33be7ab0 a2=3 a3=0 items=0 ppid=1 pid=5290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:06.091000 audit[5290]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc33be7ab0 a2=3 a3=0 items=0 ppid=1 pid=5290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:06.091000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:06.114750 kernel: audit: type=1327 audit(1765889946.091:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:06.120048 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:59:06.120000 audit[5290]: USER_START pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.122000 audit[5293]: CRED_ACQ pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.134127 kernel: audit: type=1105 audit(1765889946.120:790): pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.134207 kernel: audit: type=1103 audit(1765889946.122:791): pid=5293 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.139210 containerd[1605]: time="2025-12-16T12:59:06.139017815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:06.140299 containerd[1605]: time="2025-12-16T12:59:06.140260522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:06.140392 containerd[1605]: time="2025-12-16T12:59:06.140320486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 
12:59:06.140507 kubelet[2806]: E1216 12:59:06.140461 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:06.140997 kubelet[2806]: E1216 12:59:06.140515 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:06.140997 kubelet[2806]: E1216 12:59:06.140640 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wv5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7767f7c484-gsgtz_calico-apiserver(d3d0f018-dbb1-4af4-8317-c55456bbf69e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:06.141969 kubelet[2806]: E1216 12:59:06.141798 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e" Dec 16 12:59:06.195754 sshd[5293]: Connection closed by 10.0.0.1 port 36298 Dec 16 12:59:06.197047 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:06.196000 audit[5290]: USER_END pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.202260 systemd[1]: sshd@13-10.0.0.102:22-10.0.0.1:36298.service: Deactivated successfully. Dec 16 12:59:06.204797 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:59:06.196000 audit[5290]: CRED_DISP pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.205948 systemd-logind[1584]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:59:06.207166 systemd-logind[1584]: Removed session 14. Dec 16 12:59:06.210624 kernel: audit: type=1106 audit(1765889946.196:792): pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.210679 kernel: audit: type=1104 audit(1765889946.196:793): pid=5290 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:06.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.102:22-10.0.0.1:36298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:06.797497 containerd[1605]: time="2025-12-16T12:59:06.797136113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:59:07.206068 containerd[1605]: time="2025-12-16T12:59:07.205924553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:07.207446 containerd[1605]: time="2025-12-16T12:59:07.207415775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:59:07.207506 containerd[1605]: time="2025-12-16T12:59:07.207465480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:07.207606 kubelet[2806]: E1216 12:59:07.207551 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:07.207606 kubelet[2806]: E1216 12:59:07.207589 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:07.207919 kubelet[2806]: E1216 12:59:07.207724 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9mcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fbcbddcb-x6lxz_calico-system(1c71642e-2c37-4a5d-aec0-8a0d6c89217c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:07.208950 kubelet[2806]: E1216 12:59:07.208912 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" podUID="1c71642e-2c37-4a5d-aec0-8a0d6c89217c" Dec 16 12:59:07.797936 containerd[1605]: time="2025-12-16T12:59:07.797873074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:59:08.131880 containerd[1605]: time="2025-12-16T12:59:08.131659256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:59:08.133066 containerd[1605]: time="2025-12-16T12:59:08.133018363Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:59:08.133184 containerd[1605]: time="2025-12-16T12:59:08.133104428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:08.133282 kubelet[2806]: E1216 12:59:08.133240 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:08.133325 kubelet[2806]: E1216 12:59:08.133298 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:59:08.133504 kubelet[2806]: E1216 12:59:08.133451 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vggkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5stm_calico-system(e710a919-c171-452f-a8e0-220cab9661a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:08.134705 kubelet[2806]: E1216 12:59:08.134657 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5stm" podUID="e710a919-c171-452f-a8e0-220cab9661a8" Dec 16 12:59:08.797624 containerd[1605]: time="2025-12-16T12:59:08.797338758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:09.390110 containerd[1605]: time="2025-12-16T12:59:09.390042965Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:09.519709 containerd[1605]: 
time="2025-12-16T12:59:09.519633012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:09.519709 containerd[1605]: time="2025-12-16T12:59:09.519665063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:09.519992 kubelet[2806]: E1216 12:59:09.519930 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:09.520344 kubelet[2806]: E1216 12:59:09.519994 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:09.520344 kubelet[2806]: E1216 12:59:09.520117 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9wsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7767f7c484-w77xw_calico-apiserver(e4c04278-05a3-4964-9b08-f5b05bcddf6d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:09.521432 kubelet[2806]: E1216 12:59:09.521366 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" podUID="e4c04278-05a3-4964-9b08-f5b05bcddf6d" Dec 16 12:59:11.214679 systemd[1]: Started sshd@14-10.0.0.102:22-10.0.0.1:46288.service - OpenSSH per-connection server daemon (10.0.0.1:46288). Dec 16 12:59:11.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.102:22-10.0.0.1:46288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:11.216377 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:11.216432 kernel: audit: type=1130 audit(1765889951.213:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.102:22-10.0.0.1:46288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:11.276000 audit[5315]: USER_ACCT pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.277762 sshd[5315]: Accepted publickey for core from 10.0.0.1 port 46288 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:11.279546 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:11.278000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.284764 systemd-logind[1584]: New session 15 of user core. Dec 16 12:59:11.289570 kernel: audit: type=1101 audit(1765889951.276:796): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.289640 kernel: audit: type=1103 audit(1765889951.278:797): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.289671 kernel: audit: type=1006 audit(1765889951.278:798): pid=5315 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 16 12:59:11.278000 audit[5315]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6e257d70 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:11.297430 kernel: audit: type=1300 audit(1765889951.278:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe6e257d70 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:11.297473 kernel: audit: type=1327 audit(1765889951.278:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:11.278000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:11.305037 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:59:11.307000 audit[5315]: USER_START pid=5315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.313877 kernel: audit: type=1105 audit(1765889951.307:799): pid=5315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.313991 kernel: audit: type=1103 audit(1765889951.309:800): pid=5318 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.309000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.378577 sshd[5318]: Connection closed by 10.0.0.1 port 46288 Dec 16 12:59:11.378897 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:11.379000 audit[5315]: USER_END pid=5315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.384371 systemd[1]: sshd@14-10.0.0.102:22-10.0.0.1:46288.service: Deactivated successfully. Dec 16 12:59:11.379000 audit[5315]: CRED_DISP pid=5315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.387296 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:59:11.390623 systemd-logind[1584]: Session 15 logged out. Waiting for processes to exit. 
Dec 16 12:59:11.390738 kernel: audit: type=1106 audit(1765889951.379:801): pid=5315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.390779 kernel: audit: type=1104 audit(1765889951.379:802): pid=5315 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:11.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.102:22-10.0.0.1:46288 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:11.391667 systemd-logind[1584]: Removed session 15. 
Dec 16 12:59:11.797459 kubelet[2806]: E1216 12:59:11.797394 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9b9b98c4-zxd2x" podUID="0412d3e0-d5c8-47ca-9e38-144ba2ec1a92" Dec 16 12:59:13.798699 kubelet[2806]: E1216 12:59:13.798625 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:59:14.796478 kubelet[2806]: E1216 12:59:14.796424 2806 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:59:16.400887 systemd[1]: Started sshd@15-10.0.0.102:22-10.0.0.1:46296.service - OpenSSH per-connection server daemon (10.0.0.1:46296). Dec 16 12:59:16.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.102:22-10.0.0.1:46296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:16.402181 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:16.402300 kernel: audit: type=1130 audit(1765889956.400:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.102:22-10.0.0.1:46296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:16.457000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.458283 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 46296 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:16.460030 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:16.458000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.465042 systemd-logind[1584]: New session 16 of user core. 
Dec 16 12:59:16.468425 kernel: audit: type=1101 audit(1765889956.457:805): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.468482 kernel: audit: type=1103 audit(1765889956.458:806): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.468518 kernel: audit: type=1006 audit(1765889956.458:807): pid=5334 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 16 12:59:16.458000 audit[5334]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc02d04900 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:16.477173 kernel: audit: type=1300 audit(1765889956.458:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc02d04900 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:16.477215 kernel: audit: type=1327 audit(1765889956.458:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:16.458000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:16.491027 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:59:16.492000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.494000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.504843 kernel: audit: type=1105 audit(1765889956.492:808): pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.504905 kernel: audit: type=1103 audit(1765889956.494:809): pid=5337 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.565962 sshd[5337]: Connection closed by 10.0.0.1 port 46296 Dec 16 12:59:16.566279 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:16.566000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.571886 systemd[1]: sshd@15-10.0.0.102:22-10.0.0.1:46296.service: Deactivated successfully. 
Dec 16 12:59:16.567000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.574810 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:59:16.576182 systemd-logind[1584]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:59:16.577712 systemd-logind[1584]: Removed session 16. Dec 16 12:59:16.577913 kernel: audit: type=1106 audit(1765889956.566:810): pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.577958 kernel: audit: type=1104 audit(1765889956.567:811): pid=5334 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:16.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.102:22-10.0.0.1:46296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:16.796415 kubelet[2806]: E1216 12:59:16.796365 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" podUID="ae64ff47-24ee-417f-a174-8a680294cf45" Dec 16 12:59:17.797454 kubelet[2806]: E1216 12:59:17.797238 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e" Dec 16 12:59:18.796956 kubelet[2806]: E1216 12:59:18.796894 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" podUID="1c71642e-2c37-4a5d-aec0-8a0d6c89217c" Dec 16 12:59:19.797709 kubelet[2806]: E1216 12:59:19.797175 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" podUID="e4c04278-05a3-4964-9b08-f5b05bcddf6d" Dec 16 12:59:21.583972 systemd[1]: Started sshd@16-10.0.0.102:22-10.0.0.1:38228.service - OpenSSH per-connection server daemon (10.0.0.1:38228). Dec 16 12:59:21.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.102:22-10.0.0.1:38228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:21.597568 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:21.597652 kernel: audit: type=1130 audit(1765889961.583:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.102:22-10.0.0.1:38228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:21.653000 audit[5355]: USER_ACCT pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.654131 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 38228 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:21.656344 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:21.655000 audit[5355]: CRED_ACQ pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.661993 systemd-logind[1584]: New session 17 of user core. Dec 16 12:59:21.665052 kernel: audit: type=1101 audit(1765889961.653:814): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.665128 kernel: audit: type=1103 audit(1765889961.655:815): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.665154 kernel: audit: type=1006 audit(1765889961.655:816): pid=5355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 12:59:21.655000 audit[5355]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce3df5d40 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:21.674004 kernel: audit: type=1300 audit(1765889961.655:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce3df5d40 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:21.674129 kernel: audit: type=1327 audit(1765889961.655:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:21.655000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:21.682179 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:59:21.684000 audit[5355]: USER_START pid=5355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.686000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.696065 kernel: audit: type=1105 audit(1765889961.684:817): pid=5355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.696897 kernel: audit: type=1103 audit(1765889961.686:818): pid=5358 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.831641 sshd[5358]: Connection closed by 10.0.0.1 port 38228 Dec 16 12:59:21.832103 sshd-session[5355]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:21.832000 audit[5355]: USER_END pid=5355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.839842 systemd[1]: sshd@16-10.0.0.102:22-10.0.0.1:38228.service: Deactivated successfully. Dec 16 12:59:21.833000 audit[5355]: CRED_DISP pid=5355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.842092 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:59:21.843002 systemd-logind[1584]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:59:21.844603 systemd-logind[1584]: Removed session 17. 
Dec 16 12:59:21.845677 kernel: audit: type=1106 audit(1765889961.832:819): pid=5355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.845748 kernel: audit: type=1104 audit(1765889961.833:820): pid=5355 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:21.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.102:22-10.0.0.1:38228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:22.797403 kubelet[2806]: E1216 12:59:22.797351 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5stm" podUID="e710a919-c171-452f-a8e0-220cab9661a8" Dec 16 12:59:23.798126 containerd[1605]: time="2025-12-16T12:59:23.798063297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:59:24.274439 containerd[1605]: time="2025-12-16T12:59:24.274385162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:24.337707 containerd[1605]: time="2025-12-16T12:59:24.337634716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:59:24.337707 containerd[1605]: time="2025-12-16T12:59:24.337699219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:24.337950 kubelet[2806]: E1216 12:59:24.337906 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:24.338306 kubelet[2806]: E1216 12:59:24.337961 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:59:24.338306 kubelet[2806]: E1216 12:59:24.338090 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e22e1d47bcea42178d38e488f88370a3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8kgm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9b9b98c4-zxd2x_calico-system(0412d3e0-d5c8-47ca-9e38-144ba2ec1a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:24.340289 containerd[1605]: time="2025-12-16T12:59:24.340261075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:59:24.781067 containerd[1605]: 
time="2025-12-16T12:59:24.780991393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:24.873106 containerd[1605]: time="2025-12-16T12:59:24.873061938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:24.873535 containerd[1605]: time="2025-12-16T12:59:24.873135289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:59:24.873569 kubelet[2806]: E1216 12:59:24.873375 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:24.873569 kubelet[2806]: E1216 12:59:24.873421 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:59:24.873569 kubelet[2806]: E1216 12:59:24.873527 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8kgm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c9b9b98c4-zxd2x_calico-system(0412d3e0-d5c8-47ca-9e38-144ba2ec1a92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:24.874721 kubelet[2806]: E1216 12:59:24.874689 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9b9b98c4-zxd2x" podUID="0412d3e0-d5c8-47ca-9e38-144ba2ec1a92" Dec 16 12:59:26.844880 systemd[1]: Started sshd@17-10.0.0.102:22-10.0.0.1:38232.service - OpenSSH per-connection server daemon (10.0.0.1:38232). Dec 16 12:59:26.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.102:22-10.0.0.1:38232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:26.846183 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 12:59:26.846252 kernel: audit: type=1130 audit(1765889966.844:822): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.102:22-10.0.0.1:38232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:26.911000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:26.912502 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 38232 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:26.914536 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:26.913000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:26.920133 systemd-logind[1584]: New session 18 of user core. Dec 16 12:59:26.922194 kernel: audit: type=1101 audit(1765889966.911:823): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:26.922334 kernel: audit: type=1103 audit(1765889966.913:824): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:26.922361 kernel: audit: type=1006 audit(1765889966.913:825): pid=5398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 12:59:26.913000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea66b69d0 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:26.930086 kernel: audit: type=1300 audit(1765889966.913:825): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea66b69d0 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:26.913000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:26.931160 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:59:26.932435 kernel: audit: type=1327 audit(1765889966.913:825): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:26.933000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:26.939871 kernel: audit: type=1105 audit(1765889966.933:826): pid=5398 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:26.935000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:26.944848 kernel: audit: type=1103 audit(1765889966.935:827): pid=5401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.017209 sshd[5401]: Connection closed by 10.0.0.1 port 38232 Dec 16 12:59:27.017698 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:27.018000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.018000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.029174 kernel: audit: type=1106 audit(1765889967.018:828): pid=5398 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.029241 kernel: audit: type=1104 audit(1765889967.018:829): pid=5398 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.032793 systemd[1]: sshd@17-10.0.0.102:22-10.0.0.1:38232.service: Deactivated successfully. Dec 16 12:59:27.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.102:22-10.0.0.1:38232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:27.034986 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:59:27.035998 systemd-logind[1584]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:59:27.039673 systemd[1]: Started sshd@18-10.0.0.102:22-10.0.0.1:38246.service - OpenSSH per-connection server daemon (10.0.0.1:38246). Dec 16 12:59:27.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.102:22-10.0.0.1:38246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.041434 systemd-logind[1584]: Removed session 18. Dec 16 12:59:27.102000 audit[5415]: USER_ACCT pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.103493 sshd[5415]: Accepted publickey for core from 10.0.0.1 port 38246 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:27.104000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.104000 audit[5415]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd01122ae0 a2=3 a3=0 items=0 ppid=1 pid=5415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:27.104000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:27.105655 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:27.110018 systemd-logind[1584]: 
New session 19 of user core. Dec 16 12:59:27.116003 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:59:27.117000 audit[5415]: USER_START pid=5415 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.119000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.301924 sshd[5418]: Connection closed by 10.0.0.1 port 38246 Dec 16 12:59:27.304669 sshd-session[5415]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:27.305000 audit[5415]: USER_END pid=5415 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.305000 audit[5415]: CRED_DISP pid=5415 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.318015 systemd[1]: sshd@18-10.0.0.102:22-10.0.0.1:38246.service: Deactivated successfully. Dec 16 12:59:27.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.102:22-10.0.0.1:38246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.320508 systemd[1]: session-19.scope: Deactivated successfully. 
Dec 16 12:59:27.321369 systemd-logind[1584]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:59:27.325239 systemd[1]: Started sshd@19-10.0.0.102:22-10.0.0.1:38258.service - OpenSSH per-connection server daemon (10.0.0.1:38258). Dec 16 12:59:27.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.102:22-10.0.0.1:38258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:27.327216 systemd-logind[1584]: Removed session 19. Dec 16 12:59:27.386000 audit[5429]: USER_ACCT pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.387440 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 38258 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:27.387000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.387000 audit[5429]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb62be3c0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:27.387000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:27.389094 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:27.394561 systemd-logind[1584]: New session 20 of user core. 
Dec 16 12:59:27.404011 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:59:27.406000 audit[5429]: USER_START pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.408000 audit[5432]: CRED_ACQ pid=5432 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:27.798290 containerd[1605]: time="2025-12-16T12:59:27.798223655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:59:28.178000 audit[5447]: NETFILTER_CFG table=filter:145 family=2 entries=26 op=nft_register_rule pid=5447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:28.178000 audit[5447]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd8e2f4980 a2=0 a3=7ffd8e2f496c items=0 ppid=2937 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:28.184122 sshd[5432]: Connection closed by 10.0.0.1 port 38258 Dec 16 12:59:28.184415 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:28.184000 audit[5447]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:28.184000 audit[5447]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd8e2f4980 
a2=0 a3=0 items=0 ppid=2937 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:28.187000 audit[5429]: USER_END pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.187000 audit[5429]: CRED_DISP pid=5429 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.201558 systemd[1]: sshd@19-10.0.0.102:22-10.0.0.1:38258.service: Deactivated successfully. Dec 16 12:59:28.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.102:22-10.0.0.1:38258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.205311 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:59:28.207489 systemd-logind[1584]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:59:28.211957 systemd[1]: Started sshd@20-10.0.0.102:22-10.0.0.1:38266.service - OpenSSH per-connection server daemon (10.0.0.1:38266). Dec 16 12:59:28.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.102:22-10.0.0.1:38266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 12:59:28.213365 systemd-logind[1584]: Removed session 20. Dec 16 12:59:28.213000 audit[5454]: NETFILTER_CFG table=filter:147 family=2 entries=38 op=nft_register_rule pid=5454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:28.213000 audit[5454]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc0882d010 a2=0 a3=7ffc0882cffc items=0 ppid=2937 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:28.217000 audit[5454]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:28.217000 audit[5454]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc0882d010 a2=0 a3=0 items=0 ppid=2937 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:28.237334 containerd[1605]: time="2025-12-16T12:59:28.237271111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:28.240905 containerd[1605]: time="2025-12-16T12:59:28.240105399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:59:28.240905 containerd[1605]: 
time="2025-12-16T12:59:28.240163118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:28.241044 kubelet[2806]: E1216 12:59:28.240912 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:28.241044 kubelet[2806]: E1216 12:59:28.240961 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:59:28.241464 kubelet[2806]: E1216 12:59:28.241085 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zw4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:59:28.243634 containerd[1605]: time="2025-12-16T12:59:28.243597026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:59:28.266000 audit[5457]: USER_ACCT pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.267466 sshd[5457]: Accepted publickey for core from 10.0.0.1 port 38266 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:28.268000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.268000 audit[5457]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1b9e35e0 a2=3 a3=0 items=0 ppid=1 pid=5457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.268000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:28.269491 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:28.274775 systemd-logind[1584]: New session 21 of user core. Dec 16 12:59:28.284044 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 12:59:28.285000 audit[5457]: USER_START pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.287000 audit[5460]: CRED_ACQ pid=5460 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.462105 sshd[5460]: Connection closed by 10.0.0.1 port 38266 Dec 16 12:59:28.462188 sshd-session[5457]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:28.465000 audit[5457]: USER_END pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.465000 audit[5457]: CRED_DISP pid=5457 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.472702 systemd[1]: sshd@20-10.0.0.102:22-10.0.0.1:38266.service: Deactivated successfully. Dec 16 12:59:28.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.102:22-10.0.0.1:38266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.475020 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:59:28.476657 systemd-logind[1584]: Session 21 logged out. Waiting for processes to exit. 
Dec 16 12:59:28.479338 systemd[1]: Started sshd@21-10.0.0.102:22-10.0.0.1:38276.service - OpenSSH per-connection server daemon (10.0.0.1:38276). Dec 16 12:59:28.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.102:22-10.0.0.1:38276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.480116 systemd-logind[1584]: Removed session 21. Dec 16 12:59:28.530000 audit[5472]: USER_ACCT pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.531798 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 38276 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:28.532000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.532000 audit[5472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc7b29a40 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:28.532000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:28.533433 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:28.537866 systemd-logind[1584]: New session 22 of user core. Dec 16 12:59:28.552087 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:59:28.553000 audit[5472]: USER_START pid=5472 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.555000 audit[5475]: CRED_ACQ pid=5475 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.603991 containerd[1605]: time="2025-12-16T12:59:28.603854694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:28.605659 containerd[1605]: time="2025-12-16T12:59:28.605569203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:59:28.605959 containerd[1605]: time="2025-12-16T12:59:28.605666778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:28.606063 kubelet[2806]: E1216 12:59:28.605983 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:28.606117 kubelet[2806]: E1216 12:59:28.606074 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:59:28.606395 kubelet[2806]: E1216 12:59:28.606347 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zw4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminat
ionMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lx5v_calico-system(4e3b9dec-c7ec-4533-9b5f-135d8bcc981d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:28.607547 kubelet[2806]: E1216 12:59:28.607486 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d" Dec 16 12:59:28.619065 sshd[5475]: Connection closed by 10.0.0.1 port 38276 Dec 16 12:59:28.619388 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:28.619000 audit[5472]: USER_END pid=5472 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.620000 audit[5472]: CRED_DISP pid=5472 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:28.623636 systemd[1]: sshd@21-10.0.0.102:22-10.0.0.1:38276.service: Deactivated successfully. Dec 16 12:59:28.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.102:22-10.0.0.1:38276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:28.625899 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:59:28.627893 systemd-logind[1584]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:59:28.629311 systemd-logind[1584]: Removed session 22. Dec 16 12:59:30.796849 kubelet[2806]: E1216 12:59:30.796794 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:59:31.796299 kubelet[2806]: E1216 12:59:31.796245 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 12:59:31.797700 containerd[1605]: time="2025-12-16T12:59:31.797530301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:32.141394 containerd[1605]: time="2025-12-16T12:59:32.141258964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:32.176331 containerd[1605]: time="2025-12-16T12:59:32.176279678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:32.176391 containerd[1605]: time="2025-12-16T12:59:32.176358066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:32.176514 
kubelet[2806]: E1216 12:59:32.176477 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:32.176848 kubelet[2806]: E1216 12:59:32.176525 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:32.176848 kubelet[2806]: E1216 12:59:32.176744 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rv5ck,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c7fb6bf4b-pj7kl_calico-apiserver(ae64ff47-24ee-417f-a174-8a680294cf45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:32.176979 containerd[1605]: time="2025-12-16T12:59:32.176924322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:32.177941 kubelet[2806]: E1216 12:59:32.177895 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" podUID="ae64ff47-24ee-417f-a174-8a680294cf45" Dec 16 12:59:32.482297 containerd[1605]: time="2025-12-16T12:59:32.482170018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:59:32.520196 containerd[1605]: time="2025-12-16T12:59:32.520143127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:32.520196 containerd[1605]: time="2025-12-16T12:59:32.520181901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:32.520411 kubelet[2806]: E1216 12:59:32.520367 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:32.520452 kubelet[2806]: E1216 12:59:32.520416 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:32.520570 kubelet[2806]: E1216 12:59:32.520532 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2wv5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7767f7c484-gsgtz_calico-apiserver(d3d0f018-dbb1-4af4-8317-c55456bbf69e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:32.521954 kubelet[2806]: E1216 12:59:32.521894 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e" Dec 16 12:59:33.261000 audit[5490]: NETFILTER_CFG table=filter:149 family=2 entries=26 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:33.264782 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 12:59:33.264872 kernel: audit: type=1325 audit(1765889973.261:871): table=filter:149 family=2 entries=26 op=nft_register_rule pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:33.261000 audit[5490]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe7470cca0 a2=0 a3=7ffe7470cc8c items=0 ppid=2937 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:33.275315 kernel: audit: type=1300 audit(1765889973.261:871): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe7470cca0 a2=0 a3=7ffe7470cc8c items=0 ppid=2937 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:33.261000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:33.278642 kernel: audit: type=1327 audit(1765889973.261:871): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:33.276000 audit[5490]: NETFILTER_CFG table=nat:150 family=2 entries=104 op=nft_register_chain pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:33.276000 audit[5490]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe7470cca0 a2=0 a3=7ffe7470cc8c items=0 ppid=2937 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:33.289744 kernel: audit: type=1325 audit(1765889973.276:872): table=nat:150 family=2 entries=104 op=nft_register_chain pid=5490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 12:59:33.289799 kernel: audit: type=1300 audit(1765889973.276:872): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe7470cca0 a2=0 a3=7ffe7470cc8c items=0 ppid=2937 pid=5490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:33.289844 kernel: audit: type=1327 audit(1765889973.276:872): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:33.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 12:59:33.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.102:22-10.0.0.1:49640 comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Dec 16 12:59:33.631462 systemd[1]: Started sshd@22-10.0.0.102:22-10.0.0.1:49640.service - OpenSSH per-connection server daemon (10.0.0.1:49640). Dec 16 12:59:33.636853 kernel: audit: type=1130 audit(1765889973.629:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.102:22-10.0.0.1:49640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:33.683000 audit[5492]: USER_ACCT pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.685421 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 49640 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4 Dec 16 12:59:33.686678 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:59:33.684000 audit[5492]: CRED_ACQ pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.691231 systemd-logind[1584]: New session 23 of user core. 
Dec 16 12:59:33.694890 kernel: audit: type=1101 audit(1765889973.683:874): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.694935 kernel: audit: type=1103 audit(1765889973.684:875): pid=5492 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.695346 kernel: audit: type=1006 audit(1765889973.684:876): pid=5492 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 16 12:59:33.684000 audit[5492]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff360b8c50 a2=3 a3=0 items=0 ppid=1 pid=5492 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 12:59:33.684000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 12:59:33.707011 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 12:59:33.707000 audit[5492]: USER_START pid=5492 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.709000 audit[5495]: CRED_ACQ pid=5495 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.772194 sshd[5495]: Connection closed by 10.0.0.1 port 49640 Dec 16 12:59:33.772509 sshd-session[5492]: pam_unix(sshd:session): session closed for user core Dec 16 12:59:33.772000 audit[5492]: USER_END pid=5492 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.772000 audit[5492]: CRED_DISP pid=5492 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 12:59:33.777626 systemd[1]: sshd@22-10.0.0.102:22-10.0.0.1:49640.service: Deactivated successfully. Dec 16 12:59:33.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.102:22-10.0.0.1:49640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 12:59:33.779986 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:59:33.781148 systemd-logind[1584]: Session 23 logged out. Waiting for processes to exit. 
Dec 16 12:59:33.782812 systemd-logind[1584]: Removed session 23. Dec 16 12:59:33.799741 containerd[1605]: time="2025-12-16T12:59:33.799163314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:59:34.144157 containerd[1605]: time="2025-12-16T12:59:34.144096445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:59:34.145284 containerd[1605]: time="2025-12-16T12:59:34.145237701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:59:34.145373 containerd[1605]: time="2025-12-16T12:59:34.145294218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:34.145511 kubelet[2806]: E1216 12:59:34.145458 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:34.145931 kubelet[2806]: E1216 12:59:34.145522 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:59:34.145931 kubelet[2806]: E1216 12:59:34.145712 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9mcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77fbcbddcb-x6lxz_calico-system(1c71642e-2c37-4a5d-aec0-8a0d6c89217c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:59:34.147991 kubelet[2806]: E1216 12:59:34.147940 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" podUID="1c71642e-2c37-4a5d-aec0-8a0d6c89217c" Dec 16 12:59:34.799393 containerd[1605]: time="2025-12-16T12:59:34.799306735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:59:35.197680 containerd[1605]: time="2025-12-16T12:59:35.197497575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:59:35.198884 containerd[1605]: time="2025-12-16T12:59:35.198813772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:59:35.199067 containerd[1605]: time="2025-12-16T12:59:35.198877713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 12:59:35.199099 kubelet[2806]: E1216 12:59:35.199060 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:35.199432 kubelet[2806]: E1216 12:59:35.199109 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:59:35.199432 kubelet[2806]: E1216 12:59:35.199249 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9wsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7767f7c484-w77xw_calico-apiserver(e4c04278-05a3-4964-9b08-f5b05bcddf6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:59:35.200456 kubelet[2806]: E1216 12:59:35.200400 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-w77xw" podUID="e4c04278-05a3-4964-9b08-f5b05bcddf6d"
Dec 16 12:59:35.797723 kubelet[2806]: E1216 12:59:35.797666 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c9b9b98c4-zxd2x" podUID="0412d3e0-d5c8-47ca-9e38-144ba2ec1a92"
Dec 16 12:59:37.797859 containerd[1605]: time="2025-12-16T12:59:37.797764638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 12:59:38.157143 containerd[1605]: time="2025-12-16T12:59:38.156993678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:59:38.270646 containerd[1605]: time="2025-12-16T12:59:38.270580196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 12:59:38.270646 containerd[1605]: time="2025-12-16T12:59:38.270641722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Dec 16 12:59:38.270837 kubelet[2806]: E1216 12:59:38.270728 2806 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:59:38.270837 kubelet[2806]: E1216 12:59:38.270764 2806 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:59:38.271175 kubelet[2806]: E1216 12:59:38.270909 2806 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vggkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-r5stm_calico-system(e710a919-c171-452f-a8e0-220cab9661a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:59:38.272040 kubelet[2806]: E1216 12:59:38.271991 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-r5stm" podUID="e710a919-c171-452f-a8e0-220cab9661a8"
Dec 16 12:59:38.796364 kubelet[2806]: E1216 12:59:38.796324 2806 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 12:59:38.800558 systemd[1]: Started sshd@23-10.0.0.102:22-10.0.0.1:49654.service - OpenSSH per-connection server daemon (10.0.0.1:49654).
Dec 16 12:59:38.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.102:22-10.0.0.1:49654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:59:38.805901 kernel: kauditd_printk_skb: 7 callbacks suppressed
Dec 16 12:59:38.806017 kernel: audit: type=1130 audit(1765889978.798:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.102:22-10.0.0.1:49654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:59:38.860000 audit[5509]: USER_ACCT pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.862118 sshd[5509]: Accepted publickey for core from 10.0.0.1 port 49654 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4
Dec 16 12:59:38.864333 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:59:38.875297 kernel: audit: type=1101 audit(1765889978.860:883): pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.875404 kernel: audit: type=1103 audit(1765889978.861:884): pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.861000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.871461 systemd-logind[1584]: New session 24 of user core.
Dec 16 12:59:38.879066 kernel: audit: type=1006 audit(1765889978.861:885): pid=5509 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Dec 16 12:59:38.879109 kernel: audit: type=1300 audit(1765889978.861:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe99fe7080 a2=3 a3=0 items=0 ppid=1 pid=5509 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:59:38.861000 audit[5509]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe99fe7080 a2=3 a3=0 items=0 ppid=1 pid=5509 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:59:38.861000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:59:38.887227 kernel: audit: type=1327 audit(1765889978.861:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:59:38.891073 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 12:59:38.892000 audit[5509]: USER_START pid=5509 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.894000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.907945 kernel: audit: type=1105 audit(1765889978.892:886): pid=5509 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.908017 kernel: audit: type=1103 audit(1765889978.894:887): pid=5512 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.969975 sshd[5512]: Connection closed by 10.0.0.1 port 49654
Dec 16 12:59:38.970307 sshd-session[5509]: pam_unix(sshd:session): session closed for user core
Dec 16 12:59:38.969000 audit[5509]: USER_END pid=5509 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.975059 systemd[1]: sshd@23-10.0.0.102:22-10.0.0.1:49654.service: Deactivated successfully.
Dec 16 12:59:38.978427 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 12:59:38.985188 kernel: audit: type=1106 audit(1765889978.969:888): pid=5509 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.985247 kernel: audit: type=1104 audit(1765889978.970:889): pid=5509 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.970000 audit[5509]: CRED_DISP pid=5509 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:38.982614 systemd-logind[1584]: Session 24 logged out. Waiting for processes to exit.
Dec 16 12:59:38.983817 systemd-logind[1584]: Removed session 24.
Dec 16 12:59:38.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.102:22-10.0.0.1:49654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:59:42.796920 kubelet[2806]: E1216 12:59:42.796851 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lx5v" podUID="4e3b9dec-c7ec-4533-9b5f-135d8bcc981d"
Dec 16 12:59:43.802866 kubelet[2806]: E1216 12:59:43.801534 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7767f7c484-gsgtz" podUID="d3d0f018-dbb1-4af4-8317-c55456bbf69e"
Dec 16 12:59:43.990767 systemd[1]: Started sshd@24-10.0.0.102:22-10.0.0.1:60906.service - OpenSSH per-connection server daemon (10.0.0.1:60906).
Dec 16 12:59:43.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.102:22-10.0.0.1:60906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:59:43.992698 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 12:59:43.992792 kernel: audit: type=1130 audit(1765889983.990:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.102:22-10.0.0.1:60906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:59:44.078000 audit[5528]: USER_ACCT pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.079904 sshd[5528]: Accepted publickey for core from 10.0.0.1 port 60906 ssh2: RSA SHA256:Nb0q9jxD4EhyUGvlUh0tlGIiDz42DR960XQ2mSo6eQ4
Dec 16 12:59:44.081430 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:59:44.079000 audit[5528]: CRED_ACQ pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.086392 systemd-logind[1584]: New session 25 of user core.
Dec 16 12:59:44.089071 kernel: audit: type=1101 audit(1765889984.078:892): pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.089190 kernel: audit: type=1103 audit(1765889984.079:893): pid=5528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.089219 kernel: audit: type=1006 audit(1765889984.080:894): pid=5528 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 16 12:59:44.091785 kernel: audit: type=1300 audit(1765889984.080:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe57e077c0 a2=3 a3=0 items=0 ppid=1 pid=5528 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:59:44.080000 audit[5528]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe57e077c0 a2=3 a3=0 items=0 ppid=1 pid=5528 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 12:59:44.080000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:59:44.098023 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 12:59:44.098874 kernel: audit: type=1327 audit(1765889984.080:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 12:59:44.099000 audit[5528]: USER_START pid=5528 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.101000 audit[5531]: CRED_ACQ pid=5531 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.110844 kernel: audit: type=1105 audit(1765889984.099:895): pid=5528 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.110910 kernel: audit: type=1103 audit(1765889984.101:896): pid=5531 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.202781 sshd[5531]: Connection closed by 10.0.0.1 port 60906
Dec 16 12:59:44.203145 sshd-session[5528]: pam_unix(sshd:session): session closed for user core
Dec 16 12:59:44.204000 audit[5528]: USER_END pid=5528 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.208391 systemd[1]: sshd@24-10.0.0.102:22-10.0.0.1:60906.service: Deactivated successfully.
Dec 16 12:59:44.210768 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 12:59:44.204000 audit[5528]: CRED_DISP pid=5528 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.213589 systemd-logind[1584]: Session 25 logged out. Waiting for processes to exit.
Dec 16 12:59:44.214562 systemd-logind[1584]: Removed session 25.
Dec 16 12:59:44.217726 kernel: audit: type=1106 audit(1765889984.204:897): pid=5528 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.217789 kernel: audit: type=1104 audit(1765889984.204:898): pid=5528 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 12:59:44.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.102:22-10.0.0.1:60906 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 12:59:44.796683 kubelet[2806]: E1216 12:59:44.796616 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c7fb6bf4b-pj7kl" podUID="ae64ff47-24ee-417f-a174-8a680294cf45"
Dec 16 12:59:45.797101 kubelet[2806]: E1216 12:59:45.796816 2806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77fbcbddcb-x6lxz" podUID="1c71642e-2c37-4a5d-aec0-8a0d6c89217c"