Dec 13 22:57:48.337937 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 22:57:48.337963 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Sat Dec 13 21:04:10 -00 2025
Dec 13 22:57:48.337971 kernel: KASLR enabled
Dec 13 22:57:48.337977 kernel: efi: EFI v2.7 by EDK II
Dec 13 22:57:48.337983 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 13 22:57:48.337989 kernel: random: crng init done
Dec 13 22:57:48.337996 kernel: secureboot: Secure boot disabled
Dec 13 22:57:48.338002 kernel: ACPI: Early table checksum verification disabled
Dec 13 22:57:48.338010 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 13 22:57:48.338016 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 22:57:48.338022 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338028 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338034 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338040 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338049 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338055 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338062 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338068 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338075 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 22:57:48.338081 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 22:57:48.338088 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 13 22:57:48.338094 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 22:57:48.338102 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 13 22:57:48.338108 kernel: Zone ranges:
Dec 13 22:57:48.338115 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 22:57:48.338121 kernel: DMA32 empty
Dec 13 22:57:48.338127 kernel: Normal empty
Dec 13 22:57:48.338133 kernel: Device empty
Dec 13 22:57:48.338140 kernel: Movable zone start for each node
Dec 13 22:57:48.338146 kernel: Early memory node ranges
Dec 13 22:57:48.338153 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 13 22:57:48.338159 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 13 22:57:48.338165 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 13 22:57:48.338172 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 13 22:57:48.338180 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 13 22:57:48.338186 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 13 22:57:48.338193 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 13 22:57:48.338200 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 13 22:57:48.338206 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 13 22:57:48.338212 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 13 22:57:48.338222 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 13 22:57:48.338229 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 13 22:57:48.338236 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 13 22:57:48.338243 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 22:57:48.338250 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 22:57:48.338256 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 13 22:57:48.338264 kernel: psci: probing for conduit method from ACPI.
Dec 13 22:57:48.338270 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 22:57:48.338279 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 22:57:48.338286 kernel: psci: Trusted OS migration not required
Dec 13 22:57:48.338293 kernel: psci: SMC Calling Convention v1.1
Dec 13 22:57:48.338300 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 22:57:48.338307 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 13 22:57:48.338313 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 13 22:57:48.338320 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 22:57:48.338327 kernel: Detected PIPT I-cache on CPU0
Dec 13 22:57:48.338334 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 22:57:48.338341 kernel: CPU features: detected: Spectre-v4
Dec 13 22:57:48.338348 kernel: CPU features: detected: Spectre-BHB
Dec 13 22:57:48.338356 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 22:57:48.338363 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 22:57:48.338370 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 22:57:48.338377 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 22:57:48.338384 kernel: alternatives: applying boot alternatives
Dec 13 22:57:48.338392 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=44c63db9fd88171f565600c90d4cdf8b05fba369ef3a382917a5104525765913
Dec 13 22:57:48.338399 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 22:57:48.338406 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 22:57:48.338413 kernel: Fallback order for Node 0: 0
Dec 13 22:57:48.338420 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 13 22:57:48.338429 kernel: Policy zone: DMA
Dec 13 22:57:48.338436 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 22:57:48.338443 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 13 22:57:48.338450 kernel: software IO TLB: area num 4.
Dec 13 22:57:48.338457 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 13 22:57:48.338464 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 13 22:57:48.338471 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 22:57:48.338478 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 22:57:48.338486 kernel: rcu: RCU event tracing is enabled.
Dec 13 22:57:48.338493 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 22:57:48.338505 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 22:57:48.338515 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 22:57:48.338522 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 22:57:48.338529 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 22:57:48.338536 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 22:57:48.338543 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 22:57:48.338550 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 22:57:48.338557 kernel: GICv3: 256 SPIs implemented Dec 13 22:57:48.338566 kernel: GICv3: 0 Extended SPIs implemented Dec 13 22:57:48.338574 kernel: Root IRQ handler: gic_handle_irq Dec 13 22:57:48.338581 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 22:57:48.338588 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 13 22:57:48.338598 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 22:57:48.338606 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 22:57:48.338616 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 13 22:57:48.338631 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 13 22:57:48.338638 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 13 22:57:48.338645 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 13 22:57:48.338652 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 22:57:48.338659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 22:57:48.338666 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 22:57:48.338673 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 22:57:48.338683 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 22:57:48.338692 kernel: arm-pv: using stolen time PV Dec 13 22:57:48.338699 kernel: Console: colour dummy device 80x25 Dec 13 22:57:48.338707 kernel: ACPI: Core revision 20240827 Dec 13 22:57:48.338714 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 22:57:48.338721 kernel: pid_max: default: 32768 minimum: 301 Dec 13 22:57:48.338728 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 13 22:57:48.338736 kernel: landlock: Up and running. Dec 13 22:57:48.338744 kernel: SELinux: Initializing. Dec 13 22:57:48.338760 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 22:57:48.338768 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 22:57:48.338775 kernel: rcu: Hierarchical SRCU implementation. Dec 13 22:57:48.338783 kernel: rcu: Max phase no-delay instances is 400. Dec 13 22:57:48.338805 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 13 22:57:48.338812 kernel: Remapping and enabling EFI services. Dec 13 22:57:48.338820 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 22:57:48.338829 kernel: Detected PIPT I-cache on CPU1 Dec 13 22:57:48.338842 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 22:57:48.338851 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 13 22:57:48.338858 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 22:57:48.338866 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 22:57:48.338874 kernel: Detected PIPT I-cache on CPU2 Dec 13 22:57:48.338882 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 13 22:57:48.338891 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 13 22:57:48.338899 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 22:57:48.338906 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 13 22:57:48.338914 kernel: Detected PIPT I-cache on CPU3 Dec 13 22:57:48.338921 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 13 22:57:48.338929 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 13 22:57:48.338937 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 22:57:48.338945 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 13 22:57:48.338953 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 22:57:48.338960 kernel: SMP: Total of 4 processors activated. Dec 13 22:57:48.338968 kernel: CPU: All CPU(s) started at EL1 Dec 13 22:57:48.338975 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 22:57:48.338984 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 22:57:48.338992 kernel: CPU features: detected: Common not Private translations Dec 13 22:57:48.339001 kernel: CPU features: detected: CRC32 instructions Dec 13 22:57:48.339008 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 22:57:48.339016 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 22:57:48.339026 kernel: CPU features: detected: LSE atomic instructions Dec 13 22:57:48.339033 kernel: CPU features: detected: Privileged Access Never Dec 13 22:57:48.339041 kernel: CPU features: detected: RAS Extension Support Dec 13 22:57:48.339048 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 22:57:48.339056 kernel: alternatives: applying system-wide alternatives Dec 13 22:57:48.339065 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 13 22:57:48.339073 kernel: Memory: 2450848K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12480K init, 1038K bss, 99104K reserved, 16384K cma-reserved) Dec 13 22:57:48.339081 kernel: devtmpfs: initialized Dec 13 22:57:48.339088 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 22:57:48.339096 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 22:57:48.339104 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 22:57:48.339112 kernel: 0 pages in range for non-PLT usage Dec 13 22:57:48.339120 kernel: 515168 pages in range for PLT usage Dec 13 22:57:48.339128 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 22:57:48.339135 kernel: SMBIOS 3.0.0 present. 
Dec 13 22:57:48.339143 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 13 22:57:48.339150 kernel: DMI: Memory slots populated: 1/1 Dec 13 22:57:48.339157 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 22:57:48.339165 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 22:57:48.339174 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 22:57:48.339182 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 22:57:48.339190 kernel: audit: initializing netlink subsys (disabled) Dec 13 22:57:48.339197 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Dec 13 22:57:48.339205 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 22:57:48.339212 kernel: cpuidle: using governor menu Dec 13 22:57:48.339220 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 22:57:48.339228 kernel: ASID allocator initialised with 32768 entries Dec 13 22:57:48.339236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 22:57:48.339244 kernel: Serial: AMBA PL011 UART driver Dec 13 22:57:48.339251 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 22:57:48.339259 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 22:57:48.339266 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 22:57:48.339274 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 22:57:48.339281 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 22:57:48.339290 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 22:57:48.339298 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 22:57:48.339305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 22:57:48.339312 kernel: ACPI: Added _OSI(Module Device) Dec 13 22:57:48.339320 kernel: ACPI: Added _OSI(Processor Device) Dec 13 22:57:48.339327 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 22:57:48.339335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 22:57:48.339343 kernel: ACPI: Interpreter enabled Dec 13 22:57:48.339351 kernel: ACPI: Using GIC for interrupt routing Dec 13 22:57:48.339358 kernel: ACPI: MCFG table detected, 1 entries Dec 13 22:57:48.339366 kernel: ACPI: CPU0 has been hot-added Dec 13 22:57:48.339374 kernel: ACPI: CPU1 has been hot-added Dec 13 22:57:48.339381 kernel: ACPI: CPU2 has been hot-added Dec 13 22:57:48.339388 kernel: ACPI: CPU3 has been hot-added Dec 13 22:57:48.339397 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 22:57:48.339405 kernel: printk: legacy console [ttyAMA0] enabled Dec 13 22:57:48.339412 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 22:57:48.339586 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 22:57:48.339692 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 22:57:48.339777 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 22:57:48.339872 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 22:57:48.339953 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 22:57:48.339963 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 13 22:57:48.339971 
kernel: PCI host bridge to bus 0000:00 Dec 13 22:57:48.340059 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 22:57:48.340133 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 22:57:48.340208 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 22:57:48.340280 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 22:57:48.340384 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 13 22:57:48.340481 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 13 22:57:48.340571 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 13 22:57:48.340677 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 13 22:57:48.340766 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 22:57:48.340858 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 13 22:57:48.340941 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 13 22:57:48.341027 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 13 22:57:48.341104 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 22:57:48.341178 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 22:57:48.341254 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 22:57:48.341264 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 22:57:48.341272 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 22:57:48.341280 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 22:57:48.341288 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 22:57:48.341296 kernel: iommu: Default domain type: Translated Dec 13 22:57:48.341305 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 22:57:48.341314 kernel: efivars: Registered efivars operations Dec 13 22:57:48.341322 kernel: vgaarb: loaded Dec 13 22:57:48.341329 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 22:57:48.341337 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 22:57:48.341345 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 22:57:48.341352 kernel: pnp: PnP ACPI init Dec 13 22:57:48.341449 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 22:57:48.341460 kernel: pnp: PnP ACPI: found 1 devices Dec 13 22:57:48.341468 kernel: NET: Registered PF_INET protocol family Dec 13 22:57:48.341475 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 22:57:48.341483 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 22:57:48.341491 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 22:57:48.341499 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 22:57:48.341508 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 22:57:48.341516 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 22:57:48.341524 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 22:57:48.341532 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 22:57:48.341539 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 22:57:48.341547 kernel: PCI: CLS 0 bytes, default 64 Dec 13 22:57:48.341554 
kernel: kvm [1]: HYP mode not available Dec 13 22:57:48.341564 kernel: Initialise system trusted keyrings Dec 13 22:57:48.341571 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 22:57:48.341579 kernel: Key type asymmetric registered Dec 13 22:57:48.341586 kernel: Asymmetric key parser 'x509' registered Dec 13 22:57:48.341594 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 22:57:48.341601 kernel: io scheduler mq-deadline registered Dec 13 22:57:48.341609 kernel: io scheduler kyber registered Dec 13 22:57:48.341618 kernel: io scheduler bfq registered Dec 13 22:57:48.341895 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 22:57:48.342262 kernel: ACPI: button: Power Button [PWRB] Dec 13 22:57:48.342277 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 22:57:48.342423 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 13 22:57:48.342436 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 22:57:48.342444 kernel: thunder_xcv, ver 1.0 Dec 13 22:57:48.342458 kernel: thunder_bgx, ver 1.0 Dec 13 22:57:48.342466 kernel: nicpf, ver 1.0 Dec 13 22:57:48.342474 kernel: nicvf, ver 1.0 Dec 13 22:57:48.342578 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 22:57:48.342679 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-13T22:57:47 UTC (1765666667) Dec 13 22:57:48.342691 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 22:57:48.342701 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 13 22:57:48.342709 kernel: watchdog: NMI not fully supported Dec 13 22:57:48.342717 kernel: watchdog: Hard watchdog permanently disabled Dec 13 22:57:48.342725 kernel: NET: Registered PF_INET6 protocol family Dec 13 22:57:48.342732 kernel: Segment Routing with IPv6 Dec 13 22:57:48.342740 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 22:57:48.342747 kernel: NET: Registered PF_PACKET protocol family Dec 13 22:57:48.342755 kernel: Key type dns_resolver registered Dec 13 22:57:48.342764 kernel: registered taskstats version 1 Dec 13 22:57:48.342773 kernel: Loading compiled-in X.509 certificates Dec 13 22:57:48.342781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: d89c978154dbb01b4a4598f2db878f2ea4aca29d' Dec 13 22:57:48.342800 kernel: Demotion targets for Node 0: null Dec 13 22:57:48.342809 kernel: Key type .fscrypt registered Dec 13 22:57:48.342816 kernel: Key type fscrypt-provisioning registered Dec 13 22:57:48.342824 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 22:57:48.342834 kernel: ima: Allocated hash algorithm: sha1
Dec 13 22:57:48.342841 kernel: ima: No architecture policies found
Dec 13 22:57:48.342849 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 22:57:48.342857 kernel: clk: Disabling unused clocks
Dec 13 22:57:48.342865 kernel: PM: genpd: Disabling unused power domains
Dec 13 22:57:48.342873 kernel: Freeing unused kernel memory: 12480K
Dec 13 22:57:48.342880 kernel: Run /init as init process
Dec 13 22:57:48.342890 kernel: with arguments:
Dec 13 22:57:48.342897 kernel: /init
Dec 13 22:57:48.342905 kernel: with environment:
Dec 13 22:57:48.342913 kernel: HOME=/
Dec 13 22:57:48.342921 kernel: TERM=linux
Dec 13 22:57:48.343037 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 13 22:57:48.343119 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Dec 13 22:57:48.343132 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 22:57:48.343140 kernel: GPT:16515071 != 27000831
Dec 13 22:57:48.343148 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 22:57:48.343155 kernel: GPT:16515071 != 27000831
Dec 13 22:57:48.343162 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 22:57:48.343170 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 22:57:48.343180 kernel: SCSI subsystem initialized
Dec 13 22:57:48.343188 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 22:57:48.343196 kernel: device-mapper: uevent: version 1.0.3
Dec 13 22:57:48.343204 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 13 22:57:48.343212 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 13 22:57:48.343219 kernel: raid6: neonx8 gen() 15722 MB/s
Dec 13 22:57:48.343227 kernel: raid6: neonx4 gen() 15704 MB/s
Dec 13 22:57:48.343236 kernel: raid6: neonx2 gen() 13173 MB/s
Dec 13 22:57:48.343244 kernel: raid6: neonx1 gen() 10403 MB/s
Dec 13 22:57:48.343252 kernel: raid6: int64x8 gen() 6824 MB/s
Dec 13 22:57:48.343260 kernel: raid6: int64x4 gen() 7346 MB/s
Dec 13 22:57:48.343267 kernel: raid6: int64x2 gen() 6101 MB/s
Dec 13 22:57:48.343275 kernel: raid6: int64x1 gen() 5014 MB/s
Dec 13 22:57:48.343283 kernel: raid6: using algorithm neonx8 gen() 15722 MB/s
Dec 13 22:57:48.343292 kernel: raid6: .... xor() 11790 MB/s, rmw enabled
Dec 13 22:57:48.343300 kernel: raid6: using neon recovery algorithm
Dec 13 22:57:48.343308 kernel: xor: measuring software checksum speed
Dec 13 22:57:48.343316 kernel: 8regs : 21641 MB/sec
Dec 13 22:57:48.343323 kernel: 32regs : 21687 MB/sec
Dec 13 22:57:48.343331 kernel: arm64_neon : 26348 MB/sec
Dec 13 22:57:48.343338 kernel: xor: using function: arm64_neon (26348 MB/sec)
Dec 13 22:57:48.343346 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 22:57:48.343356 kernel: BTRFS: device fsid a1686a6f-a50a-4e68-84e0-ea41bcdb127c devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (205)
Dec 13 22:57:48.343365 kernel: BTRFS info (device dm-0): first mount of filesystem a1686a6f-a50a-4e68-84e0-ea41bcdb127c
Dec 13 22:57:48.343373 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 22:57:48.343380 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 22:57:48.343388 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 13 22:57:48.343396 kernel: loop: module loaded
Dec 13 22:57:48.343404 kernel: loop0: detected capacity change from 0 to 91832
Dec 13 22:57:48.343413 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 22:57:48.343422 systemd[1]: Successfully made /usr/ read-only.
Dec 13 22:57:48.343433 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 13 22:57:48.343442 systemd[1]: Detected virtualization kvm.
Dec 13 22:57:48.343450 systemd[1]: Detected architecture arm64.
Dec 13 22:57:48.343459 systemd[1]: Running in initrd.
Dec 13 22:57:48.343467 systemd[1]: No hostname configured, using default hostname.
Dec 13 22:57:48.343476 systemd[1]: Hostname set to .
Dec 13 22:57:48.343484 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 13 22:57:48.343492 systemd[1]: Queued start job for default target initrd.target.
Dec 13 22:57:48.343501 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 22:57:48.343509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 22:57:48.343519 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 22:57:48.343529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 22:57:48.343538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 22:57:48.343547 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 22:57:48.343555 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 22:57:48.343565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 22:57:48.343574 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 22:57:48.343582 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 13 22:57:48.343591 systemd[1]: Reached target paths.target - Path Units.
Dec 13 22:57:48.343599 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 22:57:48.343607 systemd[1]: Reached target swap.target - Swaps. Dec 13 22:57:48.343615 systemd[1]: Reached target timers.target - Timer Units. Dec 13 22:57:48.343660 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 22:57:48.343669 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 22:57:48.343677 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 22:57:48.343686 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 22:57:48.343702 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 13 22:57:48.343714 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 22:57:48.343723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 22:57:48.343733 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 22:57:48.343742 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 22:57:48.343751 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 22:57:48.343760 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 22:57:48.343768 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 22:57:48.343778 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 22:57:48.343795 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 13 22:57:48.343804 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 22:57:48.343813 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 22:57:48.343822 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 22:57:48.343833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 22:57:48.343841 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 22:57:48.343867 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 22:57:48.343876 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 22:57:48.343885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 22:57:48.343919 systemd-journald[347]: Collecting audit messages is enabled. Dec 13 22:57:48.343940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 22:57:48.343948 kernel: Bridge firewalling registered Dec 13 22:57:48.343959 systemd-journald[347]: Journal started Dec 13 22:57:48.343978 systemd-journald[347]: Runtime Journal (/run/log/journal/befde9a370684f3caa45df8bc0fcf071) is 6M, max 48.5M, 42.4M free. Dec 13 22:57:48.344021 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 22:57:48.342955 systemd-modules-load[348]: Inserted module 'br_netfilter' Dec 13 22:57:48.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.347651 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 13 22:57:48.347675 kernel: audit: type=1130 audit(1765666668.346:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.353683 kernel: audit: type=1130 audit(1765666668.349:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.353270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 22:57:48.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.357591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 22:57:48.358772 kernel: audit: type=1130 audit(1765666668.353:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.362672 kernel: audit: type=1130 audit(1765666668.359:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.362694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 22:57:48.364364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 22:57:48.366153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 22:57:48.373306 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 22:57:48.383929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 22:57:48.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.388688 kernel: audit: type=1130 audit(1765666668.384:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.384776 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 13 22:57:48.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.388373 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 22:57:48.393972 kernel: audit: type=1130 audit(1765666668.389:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.393968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 22:57:48.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.397000 audit: BPF prog-id=6 op=LOAD Dec 13 22:57:48.398745 kernel: audit: type=1130 audit(1765666668.394:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.398781 kernel: audit: type=1334 audit(1765666668.397:9): prog-id=6 op=LOAD Dec 13 22:57:48.398693 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 22:57:48.399809 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 22:57:48.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.402587 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 22:57:48.406312 kernel: audit: type=1130 audit(1765666668.400:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.427932 dracut-cmdline[386]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=44c63db9fd88171f565600c90d4cdf8b05fba369ef3a382917a5104525765913 Dec 13 22:57:48.449114 systemd-resolved[385]: Positive Trust Anchors: Dec 13 22:57:48.449131 systemd-resolved[385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 22:57:48.449135 systemd-resolved[385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 22:57:48.449165 systemd-resolved[385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 22:57:48.477414 systemd-resolved[385]: Defaulting to hostname 'linux'. Dec 13 22:57:48.478312 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 22:57:48.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:57:48.479580 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 22:57:48.515663 kernel: Loading iSCSI transport class v2.0-870. Dec 13 22:57:48.528651 kernel: iscsi: registered transport (tcp) Dec 13 22:57:48.542662 kernel: iscsi: registered transport (qla4xxx) Dec 13 22:57:48.542684 kernel: QLogic iSCSI HBA Driver Dec 13 22:57:48.563147 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 22:57:48.592816 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 22:57:48.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.594951 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 22:57:48.646463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 22:57:48.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.649811 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 22:57:48.651310 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 22:57:48.689686 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 22:57:48.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.690000 audit: BPF prog-id=7 op=LOAD Dec 13 22:57:48.690000 audit: BPF prog-id=8 op=LOAD Dec 13 22:57:48.692219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 22:57:48.724588 systemd-udevd[626]: Using default interface naming scheme 'v257'. Dec 13 22:57:48.736829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 22:57:48.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.739917 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 22:57:48.762989 dracut-pre-trigger[699]: rd.md=0: removing MD RAID activation Dec 13 22:57:48.767682 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 22:57:48.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.769000 audit: BPF prog-id=9 op=LOAD Dec 13 22:57:48.770403 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 22:57:48.789403 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 22:57:48.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.791980 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 13 22:57:48.818579 systemd-networkd[744]: lo: Link UP Dec 13 22:57:48.818588 systemd-networkd[744]: lo: Gained carrier Dec 13 22:57:48.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.819466 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 22:57:48.820602 systemd[1]: Reached target network.target - Network. Dec 13 22:57:48.859259 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 22:57:48.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.865023 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 22:57:48.899360 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 22:57:48.914367 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 22:57:48.927929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 22:57:48.940191 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 22:57:48.942181 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 22:57:48.963024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 22:57:48.963162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 22:57:48.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:48.965349 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 22:57:48.966295 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 22:57:48.970774 disk-uuid[802]: Primary Header is updated. Dec 13 22:57:48.970774 disk-uuid[802]: Secondary Entries is updated. Dec 13 22:57:48.970774 disk-uuid[802]: Secondary Header is updated. Dec 13 22:57:48.966299 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 22:57:48.967551 systemd-networkd[744]: eth0: Link UP Dec 13 22:57:48.967986 systemd-networkd[744]: eth0: Gained carrier Dec 13 22:57:48.967998 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 22:57:48.969855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 22:57:48.983718 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 22:57:49.001976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 22:57:49.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:49.043722 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Dec 13 22:57:49.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:49.045205 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 22:57:49.046508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 22:57:49.048291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 22:57:49.050950 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 22:57:49.078813 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 22:57:49.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.002130 disk-uuid[804]: Warning: The kernel is still using the old partition table. Dec 13 22:57:50.002130 disk-uuid[804]: The new table will be used at the next reboot or after you Dec 13 22:57:50.002130 disk-uuid[804]: run partprobe(8) or kpartx(8) Dec 13 22:57:50.002130 disk-uuid[804]: The operation has completed successfully. Dec 13 22:57:50.008705 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 22:57:50.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.008831 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 22:57:50.011046 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 22:57:50.038689 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (831) Dec 13 22:57:50.038875 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d Dec 13 22:57:50.040430 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 22:57:50.043666 kernel: BTRFS info (device vda6): turning on async discard Dec 13 22:57:50.043717 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 22:57:50.049646 kernel: BTRFS info (device vda6): last unmount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d Dec 13 22:57:50.050179 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 22:57:50.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.054190 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 22:57:50.168284 ignition[850]: Ignition 2.24.0 Dec 13 22:57:50.168297 ignition[850]: Stage: fetch-offline Dec 13 22:57:50.168334 ignition[850]: no configs at "/usr/lib/ignition/base.d" Dec 13 22:57:50.168343 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 22:57:50.168486 ignition[850]: parsed url from cmdline: "" Dec 13 22:57:50.168489 ignition[850]: no config URL provided Dec 13 22:57:50.168495 ignition[850]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 22:57:50.168502 ignition[850]: no config at "/usr/lib/ignition/user.ign" Dec 13 22:57:50.168541 ignition[850]: op(1): [started] loading QEMU firmware config module Dec 13 22:57:50.168545 ignition[850]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 22:57:50.174130 ignition[850]: op(1): [finished] loading QEMU firmware config module Dec 13 22:57:50.217210 ignition[850]: parsing config with SHA512: d964cd2256ecb8a39ee63bb6ccc76800d2d5593f01dc649b0d731559c7cf78cbb58a28b6ceef9670bf8c1bb331ddbf9e0d17ce4587584f1ceef35598a0d8af8f Dec 13 22:57:50.221344 unknown[850]: fetched base config from "system" Dec 13 22:57:50.221356 unknown[850]: fetched user config from "qemu" Dec 13 22:57:50.221723 ignition[850]: fetch-offline: fetch-offline passed Dec 13 22:57:50.223387 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 22:57:50.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.221793 ignition[850]: Ignition finished successfully Dec 13 22:57:50.224935 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 22:57:50.225889 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 22:57:50.250122 ignition[862]: Ignition 2.24.0 Dec 13 22:57:50.250140 ignition[862]: Stage: kargs Dec 13 22:57:50.250302 ignition[862]: no configs at "/usr/lib/ignition/base.d" Dec 13 22:57:50.250313 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 22:57:50.251119 ignition[862]: kargs: kargs passed Dec 13 22:57:50.251168 ignition[862]: Ignition finished successfully Dec 13 22:57:50.256237 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 22:57:50.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.258168 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 22:57:50.287582 ignition[869]: Ignition 2.24.0 Dec 13 22:57:50.287602 ignition[869]: Stage: disks Dec 13 22:57:50.287809 ignition[869]: no configs at "/usr/lib/ignition/base.d" Dec 13 22:57:50.287819 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 22:57:50.288752 ignition[869]: disks: disks passed Dec 13 22:57:50.288812 ignition[869]: Ignition finished successfully Dec 13 22:57:50.291161 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 22:57:50.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.292775 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Dec 13 22:57:50.294277 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 22:57:50.296108 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 22:57:50.297790 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 22:57:50.299314 systemd[1]: Reached target basic.target - Basic System. Dec 13 22:57:50.301827 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 22:57:50.341617 systemd-fsck[878]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 13 22:57:50.346279 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 22:57:50.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.348981 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 22:57:50.426644 kernel: EXT4-fs (vda9): mounted filesystem b02592d5-55bb-4524-99a1-b54eb9e1980a r/w with ordered data mode. Quota mode: none. Dec 13 22:57:50.427479 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 22:57:50.428826 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 22:57:50.431248 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 22:57:50.432896 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 22:57:50.433837 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 22:57:50.433875 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 22:57:50.433907 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 22:57:50.453487 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 22:57:50.455705 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 22:57:50.461461 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (886) Dec 13 22:57:50.461490 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d Dec 13 22:57:50.461501 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 22:57:50.465267 kernel: BTRFS info (device vda6): turning on async discard Dec 13 22:57:50.465339 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 22:57:50.466384 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 22:57:50.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.590677 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 22:57:50.593155 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 22:57:50.595911 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 22:57:50.614841 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 22:57:50.619670 kernel: BTRFS info (device vda6): last unmount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d Dec 13 22:57:50.633749 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Dec 13 22:57:50.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.645919 ignition[985]: INFO : Ignition 2.24.0 Dec 13 22:57:50.645919 ignition[985]: INFO : Stage: mount Dec 13 22:57:50.648687 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 22:57:50.648687 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 22:57:50.648687 ignition[985]: INFO : mount: mount passed Dec 13 22:57:50.648687 ignition[985]: INFO : Ignition finished successfully Dec 13 22:57:50.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:50.650336 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 22:57:50.653155 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 22:57:51.020893 systemd-networkd[744]: eth0: Gained IPv6LL Dec 13 22:57:51.429046 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 22:57:51.459610 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Dec 13 22:57:51.459677 kernel: BTRFS info (device vda6): first mount of filesystem 76f8ce4f-b00d-437a-82ef-0e2eb08be73d Dec 13 22:57:51.459698 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 22:57:51.463260 kernel: BTRFS info (device vda6): turning on async discard Dec 13 22:57:51.463308 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 22:57:51.464942 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 22:57:51.495989 ignition[1014]: INFO : Ignition 2.24.0 Dec 13 22:57:51.495989 ignition[1014]: INFO : Stage: files Dec 13 22:57:51.497477 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 22:57:51.497477 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 22:57:51.497477 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Dec 13 22:57:51.501045 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 22:57:51.501045 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 22:57:51.507040 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 22:57:51.508284 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 22:57:51.508284 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 22:57:51.507617 unknown[1014]: wrote ssh authorized keys file for user: core Dec 13 22:57:51.511814 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 13 22:57:51.511814 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Dec 13 22:57:52.010995 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 22:57:53.334101 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 22:57:53.336270 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 22:57:53.354107 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 22:57:53.354107 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 22:57:53.354107 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 13 22:57:53.354107 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 13 22:57:53.354107 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 13 22:57:53.354107 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 13 22:57:53.777356 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 22:57:54.544939 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 13 22:57:54.544939 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 22:57:54.549058 ignition[1014]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 22:57:54.591423 ignition[1014]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 22:57:54.598875 ignition[1014]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 22:57:54.600851 ignition[1014]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 22:57:54.600851 ignition[1014]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 22:57:54.600851 ignition[1014]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 22:57:54.600851 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 22:57:54.600851 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 22:57:54.600851 ignition[1014]: INFO : files: files passed Dec 13 22:57:54.600851 ignition[1014]: INFO : Ignition finished successfully Dec 13 22:57:54.622131 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 13 22:57:54.622160 kernel: audit: type=1130 audit(1765666674.605:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:57:54.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.603165 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 22:57:54.609822 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 22:57:54.622991 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 22:57:54.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.629699 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 22:57:54.638320 kernel: audit: type=1130 audit(1765666674.630:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.638349 kernel: audit: type=1131 audit(1765666674.630:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.629802 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 22:57:54.640493 initrd-setup-root-after-ignition[1045]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 22:57:54.644899 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 22:57:54.644899 initrd-setup-root-after-ignition[1047]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 22:57:54.648056 initrd-setup-root-after-ignition[1051]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 22:57:54.648549 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 22:57:54.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.650706 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 22:57:54.655401 kernel: audit: type=1130 audit(1765666674.650:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.655425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 22:57:54.704818 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 22:57:54.704949 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 22:57:54.707008 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 22:57:54.716707 kernel: audit: type=1130 audit(1765666674.706:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:57:54.716737 kernel: audit: type=1131 audit(1765666674.706:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.715973 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 22:57:54.717691 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 22:57:54.720607 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 22:57:54.756697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 22:57:54.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.760866 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 22:57:54.762909 kernel: audit: type=1130 audit(1765666674.757:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.793775 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 22:57:54.794012 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 22:57:54.795976 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 22:57:54.797862 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 22:57:54.799461 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 22:57:54.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.799608 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 22:57:54.804574 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 22:57:54.807478 kernel: audit: type=1131 audit(1765666674.800:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.806686 systemd[1]: Stopped target basic.target - Basic System. Dec 13 22:57:54.808351 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 22:57:54.810023 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 22:57:54.811608 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 22:57:54.813491 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 13 22:57:54.815293 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 22:57:54.817247 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
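The Ignition files stage logged above (ops 1 through 12) creates SSH keys for the core user, downloads the Helm tarball and the kubernetes sysext image, writes several manifests plus /etc/flatcar/update.conf, links /etc/extensions/kubernetes.raw to the sysext, and enables prepare-helm.service while disabling the preset for coreos-metadata.service. A hedged sketch of the kind of config that would produce those operations, expressed as a Python dict for illustration only: the paths and URLs are taken from the log (Ignition writes under /sysroot, so /sysroot/opt/... corresponds to /opt/... in the config), the field names assume the public Ignition v3 spec, and the spec version, SSH key, and unit bodies are placeholders, not the real config supplied to this machine:

    import json

    # Illustrative reconstruction only; everything not visible in the log is a placeholder.
    config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "passwd": {
            "users": [{
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA...PLACEHOLDER"],
            }]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz"}},
                {"path": "/home/core/install.sh"},
                {"path": "/home/core/nginx.yaml"},
                {"path": "/home/core/nfs-pod.yaml"},
                {"path": "/home/core/nfs-pvc.yaml"},
                {"path": "/etc/flatcar/update.conf"},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True, "contents": "# unit body elided"},
                {"name": "coreos-metadata.service", "enabled": False, "contents": "# unit body elided"},
            ]
        },
    }

    print(json.dumps(config, indent=2))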
Dec 13 22:57:54.818885 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 22:57:54.820523 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 22:57:54.822360 systemd[1]: Stopped target swap.target - Swaps. Dec 13 22:57:54.823708 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 22:57:54.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.823856 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 22:57:54.831345 kernel: audit: type=1131 audit(1765666674.825:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.830503 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 22:57:54.832259 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 22:57:54.833844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 22:57:54.834586 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 22:57:54.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.835605 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 22:57:54.841360 kernel: audit: type=1131 audit(1765666674.837:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.835750 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 22:57:54.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.840663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 22:57:54.840809 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 22:57:54.842426 systemd[1]: Stopped target paths.target - Path Units. Dec 13 22:57:54.844007 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 22:57:54.844669 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 22:57:54.845790 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 22:57:54.847295 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 22:57:54.848974 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 22:57:54.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.849059 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 22:57:54.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.850497 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Dec 13 22:57:54.850583 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 22:57:54.852597 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 13 22:57:54.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.852694 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 13 22:57:54.854167 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 22:57:54.854283 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 22:57:54.855950 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 22:57:54.856055 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 22:57:54.858388 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 22:57:54.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.859469 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 22:57:54.859585 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 22:57:54.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.870485 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 22:57:54.872564 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 22:57:54.872721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 22:57:54.875553 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 22:57:54.875689 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 22:57:54.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.887825 ignition[1071]: INFO : Ignition 2.24.0 Dec 13 22:57:54.887825 ignition[1071]: INFO : Stage: umount Dec 13 22:57:54.887825 ignition[1071]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 22:57:54.887825 ignition[1071]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 22:57:54.887825 ignition[1071]: INFO : umount: umount passed Dec 13 22:57:54.887825 ignition[1071]: INFO : Ignition finished successfully Dec 13 22:57:54.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:57:54.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.877468 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 22:57:54.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.877574 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 22:57:54.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.883492 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 22:57:54.883577 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 22:57:54.887965 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 22:57:54.888067 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 22:57:54.890907 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 22:57:54.891343 systemd[1]: Stopped target network.target - Network. Dec 13 22:57:54.892221 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 22:57:54.892291 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 22:57:54.894420 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 22:57:54.894487 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 22:57:54.895971 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 22:57:54.896026 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 22:57:54.898332 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 22:57:54.898383 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 22:57:54.900740 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 22:57:54.906973 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 22:57:54.922436 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 22:57:54.922567 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 22:57:54.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.927543 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 22:57:54.928534 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 22:57:54.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.932495 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 13 22:57:54.932000 audit: BPF prog-id=6 op=UNLOAD Dec 13 22:57:54.933660 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Dec 13 22:57:54.936000 audit: BPF prog-id=9 op=UNLOAD Dec 13 22:57:54.933711 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 22:57:54.937522 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 22:57:54.939265 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 22:57:54.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.939337 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 22:57:54.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.941265 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 22:57:54.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.941313 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 22:57:54.942901 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 22:57:54.942948 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 22:57:54.945197 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 22:57:54.946857 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 22:57:54.951773 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 22:57:54.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.953118 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 22:57:54.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.953210 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 22:57:54.956398 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 22:57:54.956561 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 22:57:54.959992 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 22:57:54.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.960036 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 22:57:54.961585 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 22:57:54.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.961615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 22:57:54.963166 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 13 22:57:54.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.963220 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 22:57:54.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.965686 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 22:57:54.965733 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 22:57:54.967954 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 22:57:54.968004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 22:57:54.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.971220 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 22:57:54.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.972298 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 13 22:57:54.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.972356 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 22:57:54.974106 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 22:57:54.974152 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 22:57:54.975965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 22:57:54.976009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 22:57:54.987376 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 22:57:54.987495 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 22:57:54.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.989554 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 22:57:54.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:54.989672 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 22:57:54.991363 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 22:57:54.994588 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
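Each unit the initrd tears down before switch-root is mirrored by a SERVICE_STOP audit record in the lines above. A small sketch, assuming plain journal text in the format shown here, that extracts the stop order from those records:

    import re

    # Audit records in the log look like:
    #   ... audit[1]: SERVICE_STOP pid=1 ... msg='unit=ignition-mount comm="systemd" ...'
    STOP_RE = re.compile(r"SERVICE_STOP .*?unit=([\w@\\.-]+)")

    def stopped_units(journal_text: str):
        """Return unit names in the order their SERVICE_STOP audit records appear."""
        return [m.group(1) for m in STOP_RE.finditer(journal_text)]

    sample = ("audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=ignition-mount comm=\"systemd\"' "
              "audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=systemd-resolved comm=\"systemd\"'")
    print(stopped_units(sample))  # ['ignition-mount', 'systemd-resolved']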
Dec 13 22:57:55.011711 systemd[1]: Switching root. Dec 13 22:57:55.041063 systemd-journald[347]: Journal stopped Dec 13 22:57:55.930603 systemd-journald[347]: Received SIGTERM from PID 1 (systemd). Dec 13 22:57:55.931063 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 22:57:55.931090 kernel: SELinux: policy capability open_perms=1 Dec 13 22:57:55.931101 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 22:57:55.931112 kernel: SELinux: policy capability always_check_network=0 Dec 13 22:57:55.931122 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 22:57:55.931132 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 22:57:55.931144 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 22:57:55.931156 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 22:57:55.931170 kernel: SELinux: policy capability userspace_initial_context=0 Dec 13 22:57:55.931181 systemd[1]: Successfully loaded SELinux policy in 60.749ms. Dec 13 22:57:55.931199 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.774ms. Dec 13 22:57:55.931211 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 22:57:55.931223 systemd[1]: Detected virtualization kvm. Dec 13 22:57:55.931238 systemd[1]: Detected architecture arm64. Dec 13 22:57:55.931250 systemd[1]: Detected first boot. Dec 13 22:57:55.931264 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 22:57:55.931276 zram_generator::config[1118]: No configuration found. Dec 13 22:57:55.931288 kernel: NET: Registered PF_VSOCK protocol family Dec 13 22:57:55.931298 systemd[1]: Populated /etc with preset unit settings. Dec 13 22:57:55.931312 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 22:57:55.931323 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 22:57:55.931334 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 22:57:55.931345 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 22:57:55.931356 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 22:57:55.931367 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 22:57:55.931382 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 22:57:55.931394 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 22:57:55.931406 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 22:57:55.931418 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 22:57:55.931429 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 22:57:55.931441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 22:57:55.931455 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 22:57:55.931466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 22:57:55.931479 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
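The systemd 257.9 banner above lists compile-time features as "+" (built in) and "-" (omitted) tokens, e.g. +SELINUX but -APPARMOR and -ACL on this build. A small sketch, using the feature string exactly as logged, that splits it into enabled and disabled sets:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON "
                "+UTMP -SYSVINIT +LIBARCHIVE")

    enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}

    print(f"{len(enabled)} enabled, {len(disabled)} disabled")
    print("SELinux built in:", "SELINUX" in enabled)    # True on this build
    print("AppArmor built in:", "APPARMOR" in enabled)  # False on this build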
Dec 13 22:57:55.931490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 22:57:55.931501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 22:57:55.931513 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 22:57:55.931524 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 22:57:55.931535 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 22:57:55.931547 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 22:57:55.931559 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 22:57:55.931571 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 22:57:55.931582 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 22:57:55.931593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 22:57:55.931604 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 22:57:55.931616 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 13 22:57:55.931643 systemd[1]: Reached target slices.target - Slice Units. Dec 13 22:57:55.931656 systemd[1]: Reached target swap.target - Swaps. Dec 13 22:57:55.931667 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 22:57:55.931678 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 22:57:55.931690 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 13 22:57:55.931701 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 22:57:55.931712 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 13 22:57:55.931725 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 22:57:55.931736 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 13 22:57:55.931747 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 13 22:57:55.931768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 22:57:55.931781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 22:57:55.931792 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 22:57:55.931803 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 22:57:55.931814 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 22:57:55.931827 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 22:57:55.931839 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 22:57:55.931850 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 22:57:55.931862 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 22:57:55.931873 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 22:57:55.931885 systemd[1]: Reached target machines.target - Containers. Dec 13 22:57:55.931898 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 13 22:57:55.931910 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 22:57:55.931921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 22:57:55.931933 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 22:57:55.932682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 22:57:55.932722 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 22:57:55.932735 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 22:57:55.932752 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 22:57:55.932776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 22:57:55.932789 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 22:57:55.932801 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 22:57:55.932812 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 22:57:55.932823 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 22:57:55.932835 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 22:57:55.932849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 22:57:55.932861 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 22:57:55.932953 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 22:57:55.932971 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 22:57:55.932986 kernel: ACPI: bus type drm_connector registered Dec 13 22:57:55.932998 kernel: fuse: init (API version 7.41) Dec 13 22:57:55.933010 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 22:57:55.933021 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 13 22:57:55.933032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 22:57:55.933044 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 22:57:55.933092 systemd-journald[1181]: Collecting audit messages is enabled. Dec 13 22:57:55.933120 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 22:57:55.933131 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 22:57:55.933143 systemd-journald[1181]: Journal started Dec 13 22:57:55.933165 systemd-journald[1181]: Runtime Journal (/run/log/journal/befde9a370684f3caa45df8bc0fcf071) is 6M, max 48.5M, 42.4M free. Dec 13 22:57:55.789000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 22:57:55.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:57:55.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.887000 audit: BPF prog-id=14 op=UNLOAD Dec 13 22:57:55.887000 audit: BPF prog-id=13 op=UNLOAD Dec 13 22:57:55.892000 audit: BPF prog-id=15 op=LOAD Dec 13 22:57:55.892000 audit: BPF prog-id=16 op=LOAD Dec 13 22:57:55.892000 audit: BPF prog-id=17 op=LOAD Dec 13 22:57:55.928000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 22:57:55.928000 audit[1181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe298cd20 a2=4000 a3=0 items=0 ppid=1 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:57:55.928000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 22:57:55.698553 systemd[1]: Queued start job for default target multi-user.target. Dec 13 22:57:55.721981 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 22:57:55.722519 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 22:57:55.934904 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 22:57:55.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.936028 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 22:57:55.937185 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 22:57:55.938408 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 22:57:55.940638 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 22:57:55.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.942148 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 22:57:55.942408 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 22:57:55.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.943982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 22:57:55.944253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 22:57:55.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:57:55.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.945671 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 22:57:55.945839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 22:57:55.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.947147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 22:57:55.947325 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 22:57:55.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.948887 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 22:57:55.949092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 22:57:55.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.950549 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 22:57:55.950754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 22:57:55.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.952113 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 22:57:55.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.953649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 13 22:57:55.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.955836 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 22:57:55.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.959783 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 13 22:57:55.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.962104 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 22:57:55.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:55.975064 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 22:57:55.976553 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 13 22:57:55.978937 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 22:57:55.980971 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 22:57:55.982014 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 22:57:55.982060 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 22:57:55.983920 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 13 22:57:55.985569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 22:57:55.985712 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 22:57:55.987542 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 22:57:55.990865 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 22:57:55.991881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 22:57:55.993042 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 22:57:55.994000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 22:57:55.997827 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 22:57:55.999611 systemd-journald[1181]: Time spent on flushing to /var/log/journal/befde9a370684f3caa45df8bc0fcf071 is 20.098ms for 1000 entries. Dec 13 22:57:55.999611 systemd-journald[1181]: System Journal (/var/log/journal/befde9a370684f3caa45df8bc0fcf071) is 8M, max 163.5M, 155.5M free. Dec 13 22:57:56.029796 systemd-journald[1181]: Received client request to flush runtime journal. 
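The journald statistics above report 20.098ms spent flushing 1000 entries to persistent storage, with the runtime journal at 6M of a 48.5M cap and the system journal at 8M of 163.5M. A quick arithmetic check of the per-entry flush cost, using only the numbers logged above:

    flush_ms = 20.098   # time systemd-journald reports spending on the flush
    entries = 1000      # entries flushed in that time

    per_entry_us = flush_ms * 1000 / entries
    print(f"{per_entry_us:.1f} microseconds per entry")  # ~20.1 us/entry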
Dec 13 22:57:56.029851 kernel: loop1: detected capacity change from 0 to 207008 Dec 13 22:57:56.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.002918 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 22:57:56.005279 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 22:57:56.008698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 22:57:56.010695 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 22:57:56.011837 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 22:57:56.014175 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 22:57:56.019557 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 22:57:56.022712 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 13 22:57:56.030859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 22:57:56.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.034942 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 22:57:56.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.051724 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 22:57:56.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.054674 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 13 22:57:56.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.056000 audit: BPF prog-id=18 op=LOAD Dec 13 22:57:56.057000 audit: BPF prog-id=19 op=LOAD Dec 13 22:57:56.057000 audit: BPF prog-id=20 op=LOAD Dec 13 22:57:56.058468 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 13 22:57:56.060000 audit: BPF prog-id=21 op=LOAD Dec 13 22:57:56.061984 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 22:57:56.064658 kernel: loop2: detected capacity change from 0 to 353272 Dec 13 22:57:56.065367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 22:57:56.067000 audit: BPF prog-id=22 op=LOAD Dec 13 22:57:56.067000 audit: BPF prog-id=23 op=LOAD Dec 13 22:57:56.067000 audit: BPF prog-id=24 op=LOAD Dec 13 22:57:56.069672 kernel: loop2: p1 p2 p3 Dec 13 22:57:56.078005 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 13 22:57:56.080000 audit: BPF prog-id=25 op=LOAD Dec 13 22:57:56.080000 audit: BPF prog-id=26 op=LOAD Dec 13 22:57:56.080000 audit: BPF prog-id=27 op=LOAD Dec 13 22:57:56.082955 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 22:57:56.089644 kernel: erofs: (device loop2p1): mounted with root inode @ nid 39. Dec 13 22:57:56.097854 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Dec 13 22:57:56.098187 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Dec 13 22:57:56.102293 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 22:57:56.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.109710 kernel: loop3: detected capacity change from 0 to 161080 Dec 13 22:57:56.110645 kernel: loop3: p1 p2 p3 Dec 13 22:57:56.115355 systemd-nsresourced[1251]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 13 22:57:56.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.115768 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 22:57:56.120922 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 13 22:57:56.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.130717 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. Dec 13 22:57:56.150660 kernel: loop4: detected capacity change from 0 to 207008 Dec 13 22:57:56.158878 kernel: loop5: detected capacity change from 0 to 353272 Dec 13 22:57:56.159048 kernel: loop5: p1 p2 p3 Dec 13 22:57:56.168715 systemd-oomd[1248]: No swap; memory pressure usage will be degraded Dec 13 22:57:56.169251 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 13 22:57:56.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.177194 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 22:57:56.177295 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 22:57:56.177329 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 13 22:57:56.178651 kernel: device-mapper: ioctl: error adding target to table Dec 13 22:57:56.178643 (sd-merge)[1272]: device-mapper: reload ioctl on b35b2492fcca387995ac7cc700425775891a7db9ed46359c680e82ec44f4021d-verity (253:1) failed: Invalid argument Dec 13 22:57:56.180242 systemd-resolved[1249]: Positive Trust Anchors: Dec 13 22:57:56.180264 systemd-resolved[1249]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 22:57:56.180268 systemd-resolved[1249]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 22:57:56.180302 systemd-resolved[1249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 22:57:56.184352 systemd-resolved[1249]: Defaulting to hostname 'linux'. Dec 13 22:57:56.186145 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 22:57:56.186645 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 22:57:56.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.187502 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 22:57:56.471761 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 22:57:56.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.472000 audit: BPF prog-id=8 op=UNLOAD Dec 13 22:57:56.472000 audit: BPF prog-id=7 op=UNLOAD Dec 13 22:57:56.473000 audit: BPF prog-id=28 op=LOAD Dec 13 22:57:56.473000 audit: BPF prog-id=29 op=LOAD Dec 13 22:57:56.474578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 22:57:56.514922 systemd-udevd[1279]: Using default interface naming scheme 'v257'. Dec 13 22:57:56.530456 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 22:57:56.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.532000 audit: BPF prog-id=30 op=LOAD Dec 13 22:57:56.534302 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 22:57:56.591590 systemd-networkd[1290]: lo: Link UP Dec 13 22:57:56.591599 systemd-networkd[1290]: lo: Gained carrier Dec 13 22:57:56.592490 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 22:57:56.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.594042 systemd[1]: Reached target network.target - Network. Dec 13 22:57:56.596062 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 13 22:57:56.598356 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Dec 13 22:57:56.605482 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 22:57:56.614084 systemd-networkd[1290]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 22:57:56.614094 systemd-networkd[1290]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 22:57:56.615128 systemd-networkd[1290]: eth0: Link UP Dec 13 22:57:56.615250 systemd-networkd[1290]: eth0: Gained carrier Dec 13 22:57:56.615272 systemd-networkd[1290]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 22:57:56.623713 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 13 22:57:56.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.629603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 22:57:56.629837 systemd-networkd[1290]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 22:57:56.632685 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 22:57:56.656176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 22:57:56.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.712870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 22:57:56.720653 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. Dec 13 22:57:56.722668 kernel: loop6: detected capacity change from 0 to 161080 Dec 13 22:57:56.723635 kernel: loop6: p1 p2 p3 Dec 13 22:57:56.724785 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 22:57:56.737380 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 22:57:56.737469 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 22:57:56.737489 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 13 22:57:56.739710 kernel: device-mapper: ioctl: error adding target to table Dec 13 22:57:56.739633 (sd-merge)[1272]: device-mapper: reload ioctl on cf827620bc7ad537f83bb2a823378974b3cc077c207d7b04c642a58e7bc0ec99-verity (253:2) failed: Invalid argument Dec 13 22:57:56.746688 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 13 22:57:56.754850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 22:57:56.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:56.765649 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 13 22:57:56.765907 (sd-merge)[1272]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. 
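The eth0 entries above show systemd-networkd matching the interface against the catch-all /usr/lib/systemd/network/zz-default.network and acquiring a DHCPv4 lease, while warning that the match rests on a potentially unpredictable interface name. A minimal sketch of a per-interface file that sorts before zz-default.network and pins the match to the NIC's MAC address instead (hypothetical file name and address; not a file shipped on this image):

  # /etc/systemd/network/10-uplink.network
  [Match]
  MACAddress=52:54:00:12:34:56

  [Network]
  DHCP=yes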
Dec 13 22:57:56.768990 (sd-merge)[1272]: Merged extensions into '/usr'. Dec 13 22:57:56.772187 systemd[1]: Reload requested from client PID 1232 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 22:57:56.772210 systemd[1]: Reloading... Dec 13 22:57:56.832677 zram_generator::config[1376]: No configuration found. Dec 13 22:57:57.008175 systemd[1]: Reloading finished in 235 ms. Dec 13 22:57:57.030882 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 22:57:57.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:57.044115 systemd[1]: Starting ensure-sysext.service... Dec 13 22:57:57.045997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 22:57:57.046000 audit: BPF prog-id=31 op=LOAD Dec 13 22:57:57.046000 audit: BPF prog-id=32 op=LOAD Dec 13 22:57:57.046000 audit: BPF prog-id=28 op=UNLOAD Dec 13 22:57:57.046000 audit: BPF prog-id=29 op=UNLOAD Dec 13 22:57:57.047000 audit: BPF prog-id=33 op=LOAD Dec 13 22:57:57.047000 audit: BPF prog-id=22 op=UNLOAD Dec 13 22:57:57.047000 audit: BPF prog-id=34 op=LOAD Dec 13 22:57:57.047000 audit: BPF prog-id=35 op=LOAD Dec 13 22:57:57.047000 audit: BPF prog-id=23 op=UNLOAD Dec 13 22:57:57.047000 audit: BPF prog-id=24 op=UNLOAD Dec 13 22:57:57.048000 audit: BPF prog-id=36 op=LOAD Dec 13 22:57:57.048000 audit: BPF prog-id=30 op=UNLOAD Dec 13 22:57:57.048000 audit: BPF prog-id=37 op=LOAD Dec 13 22:57:57.048000 audit: BPF prog-id=15 op=UNLOAD Dec 13 22:57:57.049000 audit: BPF prog-id=38 op=LOAD Dec 13 22:57:57.049000 audit: BPF prog-id=39 op=LOAD Dec 13 22:57:57.049000 audit: BPF prog-id=16 op=UNLOAD Dec 13 22:57:57.049000 audit: BPF prog-id=17 op=UNLOAD Dec 13 22:57:57.049000 audit: BPF prog-id=40 op=LOAD Dec 13 22:57:57.049000 audit: BPF prog-id=25 op=UNLOAD Dec 13 22:57:57.050000 audit: BPF prog-id=41 op=LOAD Dec 13 22:57:57.050000 audit: BPF prog-id=42 op=LOAD Dec 13 22:57:57.050000 audit: BPF prog-id=26 op=UNLOAD Dec 13 22:57:57.050000 audit: BPF prog-id=27 op=UNLOAD Dec 13 22:57:57.050000 audit: BPF prog-id=43 op=LOAD Dec 13 22:57:57.050000 audit: BPF prog-id=21 op=UNLOAD Dec 13 22:57:57.051000 audit: BPF prog-id=44 op=LOAD Dec 13 22:57:57.051000 audit: BPF prog-id=18 op=UNLOAD Dec 13 22:57:57.051000 audit: BPF prog-id=45 op=LOAD Dec 13 22:57:57.051000 audit: BPF prog-id=46 op=LOAD Dec 13 22:57:57.051000 audit: BPF prog-id=19 op=UNLOAD Dec 13 22:57:57.051000 audit: BPF prog-id=20 op=UNLOAD Dec 13 22:57:57.057322 systemd[1]: Reload requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)... Dec 13 22:57:57.057341 systemd[1]: Reloading... Dec 13 22:57:57.061013 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 13 22:57:57.061054 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 13 22:57:57.061449 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 22:57:57.062493 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Dec 13 22:57:57.062551 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Dec 13 22:57:57.066230 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. 
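The sd-merge entries above record systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, followed by a service-manager reload. As a rough sketch of how such merges are typically inspected and refreshed from a shell (standard systemd-sysext subcommands; an extension image is only accepted if it carries a matching /usr/lib/extension-release.d/extension-release.<name> file):

  # show which extension images are currently merged into /usr and /opt
  systemd-sysext status
  # unmerge and re-merge after adding or removing an image under /var/lib/extensions
  systemd-sysext refresh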
Dec 13 22:57:57.066244 systemd-tmpfiles[1410]: Skipping /boot Dec 13 22:57:57.072812 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 22:57:57.072829 systemd-tmpfiles[1410]: Skipping /boot Dec 13 22:57:57.120660 zram_generator::config[1444]: No configuration found. Dec 13 22:57:57.295497 systemd[1]: Reloading finished in 237 ms. Dec 13 22:57:57.316000 audit: BPF prog-id=47 op=LOAD Dec 13 22:57:57.316000 audit: BPF prog-id=33 op=UNLOAD Dec 13 22:57:57.316000 audit: BPF prog-id=48 op=LOAD Dec 13 22:57:57.316000 audit: BPF prog-id=49 op=LOAD Dec 13 22:57:57.316000 audit: BPF prog-id=34 op=UNLOAD Dec 13 22:57:57.316000 audit: BPF prog-id=35 op=UNLOAD Dec 13 22:57:57.317000 audit: BPF prog-id=50 op=LOAD Dec 13 22:57:57.317000 audit: BPF prog-id=44 op=UNLOAD Dec 13 22:57:57.317000 audit: BPF prog-id=51 op=LOAD Dec 13 22:57:57.317000 audit: BPF prog-id=52 op=LOAD Dec 13 22:57:57.317000 audit: BPF prog-id=45 op=UNLOAD Dec 13 22:57:57.317000 audit: BPF prog-id=46 op=UNLOAD Dec 13 22:57:57.317000 audit: BPF prog-id=53 op=LOAD Dec 13 22:57:57.317000 audit: BPF prog-id=37 op=UNLOAD Dec 13 22:57:57.317000 audit: BPF prog-id=54 op=LOAD Dec 13 22:57:57.317000 audit: BPF prog-id=55 op=LOAD Dec 13 22:57:57.317000 audit: BPF prog-id=38 op=UNLOAD Dec 13 22:57:57.318000 audit: BPF prog-id=39 op=UNLOAD Dec 13 22:57:57.318000 audit: BPF prog-id=56 op=LOAD Dec 13 22:57:57.318000 audit: BPF prog-id=40 op=UNLOAD Dec 13 22:57:57.318000 audit: BPF prog-id=57 op=LOAD Dec 13 22:57:57.318000 audit: BPF prog-id=58 op=LOAD Dec 13 22:57:57.318000 audit: BPF prog-id=41 op=UNLOAD Dec 13 22:57:57.318000 audit: BPF prog-id=42 op=UNLOAD Dec 13 22:57:57.319000 audit: BPF prog-id=59 op=LOAD Dec 13 22:57:57.319000 audit: BPF prog-id=43 op=UNLOAD Dec 13 22:57:57.319000 audit: BPF prog-id=60 op=LOAD Dec 13 22:57:57.333000 audit: BPF prog-id=36 op=UNLOAD Dec 13 22:57:57.333000 audit: BPF prog-id=61 op=LOAD Dec 13 22:57:57.333000 audit: BPF prog-id=62 op=LOAD Dec 13 22:57:57.333000 audit: BPF prog-id=31 op=UNLOAD Dec 13 22:57:57.333000 audit: BPF prog-id=32 op=UNLOAD Dec 13 22:57:57.336360 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 22:57:57.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:57.343871 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 22:57:57.345972 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 22:57:57.355323 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 22:57:57.357463 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 22:57:57.362864 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 22:57:57.366649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 22:57:57.367689 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 22:57:57.370962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 22:57:57.379889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 22:57:57.381970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 22:57:57.382166 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 22:57:57.382266 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 22:57:57.383375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 22:57:57.385674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 22:57:57.385000 audit[1484]: SYSTEM_BOOT pid=1484 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 22:57:57.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:57.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:57:57.394648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 22:57:57.396816 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 22:57:57.398748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 22:57:57.398983 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 22:57:57.399120 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 22:57:57.399000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 22:57:57.399000 audit[1507]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcbed1110 a2=420 a3=0 items=0 ppid=1479 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:57:57.399000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 22:57:57.402678 augenrules[1507]: No rules Dec 13 22:57:57.402114 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 22:57:57.411200 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 22:57:57.414681 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 22:57:57.416493 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 22:57:57.418271 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
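The augenrules/auditctl entries above show /etc/audit/audit.rules being loaded with no rules defined, so only the events the kernel and systemd emit on their own (SERVICE_START, BPF loads, and so on) are being recorded. A small sketch of adding a watch rule by hand, with a hypothetical path and key name; a persistent rule would instead go in a file under /etc/audit/rules.d/ and be applied with augenrules --load:

  # watch writes and attribute changes under /etc/kubernetes, tagged with a key
  auditctl -w /etc/kubernetes/ -p wa -k kube-config
  # list the rules currently loaded in the kernel
  auditctl -l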
Dec 13 22:57:57.419972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 22:57:57.420185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 22:57:57.421929 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 22:57:57.422128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 22:57:57.423675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 22:57:57.423869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 22:57:57.434551 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 22:57:57.436041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 22:57:57.437335 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 22:57:57.448079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 22:57:57.449976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 22:57:57.452041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 22:57:57.454783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 22:57:57.454983 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 22:57:57.455080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 22:57:57.455191 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 22:57:57.456433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 22:57:57.457711 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 22:57:57.465180 augenrules[1520]: /sbin/augenrules: No change Dec 13 22:57:57.467945 systemd[1]: Finished ensure-sysext.service. Dec 13 22:57:57.470345 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 22:57:57.470558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 22:57:57.472253 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 22:57:57.472483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 22:57:57.473914 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 22:57:57.474704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 22:57:57.477000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 22:57:57.477000 audit[1546]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe4ec6760 a2=420 a3=0 items=0 ppid=1520 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:57:57.477000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 22:57:57.478000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 22:57:57.478000 audit[1546]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe4ec8be0 a2=420 a3=0 items=0 ppid=1520 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:57:57.478000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 22:57:57.478978 augenrules[1546]: No rules Dec 13 22:57:57.480819 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 22:57:57.481101 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 22:57:57.482925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 22:57:57.483021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 22:57:57.485005 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 22:57:57.545609 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 22:57:57.960049 systemd-resolved[1249]: Clock change detected. Flushing caches. Dec 13 22:57:57.960065 systemd-timesyncd[1555]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 22:57:57.960107 systemd-timesyncd[1555]: Initial clock synchronization to Sat 2025-12-13 22:57:57.959967 UTC. Dec 13 22:57:57.960977 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 22:57:58.036525 ldconfig[1481]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 22:57:58.041136 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 22:57:58.044866 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 22:57:58.073694 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 22:57:58.074893 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 22:57:58.075903 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 22:57:58.076932 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 22:57:58.078131 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 22:57:58.079152 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 22:57:58.080286 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. 
Dec 13 22:57:58.081476 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 13 22:57:58.082447 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 22:57:58.083528 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 22:57:58.083577 systemd[1]: Reached target paths.target - Path Units. Dec 13 22:57:58.084304 systemd[1]: Reached target timers.target - Timer Units. Dec 13 22:57:58.085840 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 22:57:58.088141 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 22:57:58.090861 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 13 22:57:58.092173 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 13 22:57:58.093253 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 13 22:57:58.096406 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 22:57:58.097670 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 13 22:57:58.099252 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 22:57:58.100261 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 22:57:58.101081 systemd[1]: Reached target basic.target - Basic System. Dec 13 22:57:58.101886 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 22:57:58.101919 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 22:57:58.103046 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 22:57:58.105145 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 22:57:58.107057 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 22:57:58.109060 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 22:57:58.111035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 22:57:58.112038 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 22:57:58.114338 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 22:57:58.117612 jq[1568]: false Dec 13 22:57:58.117672 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 22:57:58.119734 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 22:57:58.126532 extend-filesystems[1569]: Found /dev/vda6 Dec 13 22:57:58.128905 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 22:57:58.130196 extend-filesystems[1569]: Found /dev/vda9 Dec 13 22:57:58.133426 extend-filesystems[1569]: Checking size of /dev/vda9 Dec 13 22:57:58.134356 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 22:57:58.135365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 22:57:58.135907 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 13 22:57:58.137810 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 22:57:58.140069 extend-filesystems[1569]: Resized partition /dev/vda9 Dec 13 22:57:58.139591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 22:57:58.144199 extend-filesystems[1593]: resize2fs 1.47.3 (8-Jul-2025) Dec 13 22:57:58.146731 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 22:57:58.148773 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 22:57:58.149083 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 22:57:58.149445 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 22:57:58.149745 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 22:57:58.153572 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 13 22:57:58.154057 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 22:57:58.155732 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 22:57:58.156848 jq[1592]: true Dec 13 22:57:58.169668 update_engine[1588]: I20251213 22:57:58.168941 1588 main.cc:92] Flatcar Update Engine starting Dec 13 22:57:58.177272 jq[1607]: true Dec 13 22:57:58.186776 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 13 22:57:58.200787 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 22:57:58.200787 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 22:57:58.200787 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 13 22:57:58.201145 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 22:57:58.210711 tar[1598]: linux-arm64/LICENSE Dec 13 22:57:58.210907 extend-filesystems[1569]: Resized filesystem in /dev/vda9 Dec 13 22:57:58.201940 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 22:57:58.211737 tar[1598]: linux-arm64/helm Dec 13 22:57:58.224269 dbus-daemon[1566]: [system] SELinux support is enabled Dec 13 22:57:58.224529 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 22:57:58.228764 update_engine[1588]: I20251213 22:57:58.228435 1588 update_check_scheduler.cc:74] Next update check in 7m39s Dec 13 22:57:58.234174 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 22:57:58.235329 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 22:57:58.237224 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 22:57:58.237251 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 22:57:58.239257 systemd[1]: Started update-engine.service - Update Engine. Dec 13 22:57:58.243801 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 22:57:58.257011 bash[1638]: Updated "/home/core/.ssh/authorized_keys" Dec 13 22:57:58.258456 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
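The extend-filesystems entries above show the root filesystem on /dev/vda9 being grown online via resize2fs, from 456704 to 1784827 4k blocks, while mounted on /. The equivalent manual step, assuming the underlying partition has already been enlarged, is a single online resize (ext4 supports growing while mounted):

  # grow the mounted ext4 filesystem on /dev/vda9 to fill its partition
  resize2fs /dev/vda9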
Dec 13 22:57:58.260298 systemd-logind[1585]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 22:57:58.262645 systemd-logind[1585]: New seat seat0. Dec 13 22:57:58.263089 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 22:57:58.267015 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 22:57:58.305842 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 22:57:58.359147 containerd[1608]: time="2025-12-13T22:57:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 13 22:57:58.361799 containerd[1608]: time="2025-12-13T22:57:58.361670317Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 13 22:57:58.373592 containerd[1608]: time="2025-12-13T22:57:58.373527197Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.96µs" Dec 13 22:57:58.373592 containerd[1608]: time="2025-12-13T22:57:58.373585037Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 13 22:57:58.373716 containerd[1608]: time="2025-12-13T22:57:58.373635077Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 13 22:57:58.373716 containerd[1608]: time="2025-12-13T22:57:58.373647957Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 13 22:57:58.373843 containerd[1608]: time="2025-12-13T22:57:58.373821997Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 13 22:57:58.373888 containerd[1608]: time="2025-12-13T22:57:58.373846357Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 22:57:58.373935 containerd[1608]: time="2025-12-13T22:57:58.373916797Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 22:57:58.373935 containerd[1608]: time="2025-12-13T22:57:58.373932077Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374227 containerd[1608]: time="2025-12-13T22:57:58.374203877Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374227 containerd[1608]: time="2025-12-13T22:57:58.374224637Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374275 containerd[1608]: time="2025-12-13T22:57:58.374236517Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374275 containerd[1608]: time="2025-12-13T22:57:58.374245877Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374443 containerd[1608]: time="2025-12-13T22:57:58.374421957Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374521 containerd[1608]: time="2025-12-13T22:57:58.374504997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374728 containerd[1608]: time="2025-12-13T22:57:58.374708597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374754 containerd[1608]: time="2025-12-13T22:57:58.374743317Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 22:57:58.374773 containerd[1608]: time="2025-12-13T22:57:58.374755677Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 13 22:57:58.375165 containerd[1608]: time="2025-12-13T22:57:58.375138797Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 13 22:57:58.378435 containerd[1608]: time="2025-12-13T22:57:58.378398197Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 13 22:57:58.378562 containerd[1608]: time="2025-12-13T22:57:58.378536837Z" level=info msg="metadata content store policy set" policy=shared Dec 13 22:57:58.383733 containerd[1608]: time="2025-12-13T22:57:58.383688237Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 13 22:57:58.383792 containerd[1608]: time="2025-12-13T22:57:58.383774757Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 22:57:58.384085 containerd[1608]: time="2025-12-13T22:57:58.384055517Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 22:57:58.384144 containerd[1608]: time="2025-12-13T22:57:58.384083997Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 13 22:57:58.384172 containerd[1608]: time="2025-12-13T22:57:58.384150757Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 13 22:57:58.384172 containerd[1608]: time="2025-12-13T22:57:58.384165517Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 13 22:57:58.384205 containerd[1608]: time="2025-12-13T22:57:58.384178317Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 13 22:57:58.384205 containerd[1608]: time="2025-12-13T22:57:58.384188797Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 13 22:57:58.384261 containerd[1608]: time="2025-12-13T22:57:58.384243037Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 13 22:57:58.384289 containerd[1608]: time="2025-12-13T22:57:58.384266197Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 13 22:57:58.384289 containerd[1608]: time="2025-12-13T22:57:58.384280597Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 13 22:57:58.384329 containerd[1608]: 
time="2025-12-13T22:57:58.384291077Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 13 22:57:58.384329 containerd[1608]: time="2025-12-13T22:57:58.384301317Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 13 22:57:58.384329 containerd[1608]: time="2025-12-13T22:57:58.384315677Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 13 22:57:58.384994 containerd[1608]: time="2025-12-13T22:57:58.384967197Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 13 22:57:58.385019 containerd[1608]: time="2025-12-13T22:57:58.385004957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 13 22:57:58.385102 containerd[1608]: time="2025-12-13T22:57:58.385083357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 13 22:57:58.385126 containerd[1608]: time="2025-12-13T22:57:58.385105117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 13 22:57:58.385204 containerd[1608]: time="2025-12-13T22:57:58.385124997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 13 22:57:58.385204 containerd[1608]: time="2025-12-13T22:57:58.385135357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 13 22:57:58.385204 containerd[1608]: time="2025-12-13T22:57:58.385149277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 13 22:57:58.385204 containerd[1608]: time="2025-12-13T22:57:58.385164877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 13 22:57:58.385204 containerd[1608]: time="2025-12-13T22:57:58.385177517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 13 22:57:58.385204 containerd[1608]: time="2025-12-13T22:57:58.385189637Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 13 22:57:58.385204 containerd[1608]: time="2025-12-13T22:57:58.385201477Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 13 22:57:58.385319 containerd[1608]: time="2025-12-13T22:57:58.385235757Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 13 22:57:58.385754 containerd[1608]: time="2025-12-13T22:57:58.385723717Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 13 22:57:58.385781 containerd[1608]: time="2025-12-13T22:57:58.385761797Z" level=info msg="Start snapshots syncer" Dec 13 22:57:58.386109 containerd[1608]: time="2025-12-13T22:57:58.386083837Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 13 22:57:58.386773 containerd[1608]: time="2025-12-13T22:57:58.386730517Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 13 22:57:58.386881 containerd[1608]: time="2025-12-13T22:57:58.386791477Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 13 22:57:58.386922 containerd[1608]: time="2025-12-13T22:57:58.386901797Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 13 22:57:58.387298 containerd[1608]: time="2025-12-13T22:57:58.387271077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 13 22:57:58.387322 containerd[1608]: time="2025-12-13T22:57:58.387312597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 13 22:57:58.387340 containerd[1608]: time="2025-12-13T22:57:58.387327397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 13 22:57:58.387358 containerd[1608]: time="2025-12-13T22:57:58.387338157Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 13 22:57:58.387358 containerd[1608]: time="2025-12-13T22:57:58.387351797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 13 22:57:58.387398 containerd[1608]: time="2025-12-13T22:57:58.387363957Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 13 22:57:58.387398 containerd[1608]: time="2025-12-13T22:57:58.387375837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 13 22:57:58.387398 containerd[1608]: time="2025-12-13T22:57:58.387386437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 13 
22:57:58.387445 containerd[1608]: time="2025-12-13T22:57:58.387397837Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 13 22:57:58.387840 containerd[1608]: time="2025-12-13T22:57:58.387799517Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 22:57:58.387878 containerd[1608]: time="2025-12-13T22:57:58.387846037Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 22:57:58.387878 containerd[1608]: time="2025-12-13T22:57:58.387857437Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 22:57:58.387929 containerd[1608]: time="2025-12-13T22:57:58.387867677Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 22:57:58.387954 containerd[1608]: time="2025-12-13T22:57:58.387927557Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 13 22:57:58.387954 containerd[1608]: time="2025-12-13T22:57:58.387942717Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 13 22:57:58.387988 containerd[1608]: time="2025-12-13T22:57:58.387953797Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 13 22:57:58.388089 containerd[1608]: time="2025-12-13T22:57:58.388071557Z" level=info msg="runtime interface created" Dec 13 22:57:58.388089 containerd[1608]: time="2025-12-13T22:57:58.388085837Z" level=info msg="created NRI interface" Dec 13 22:57:58.388136 containerd[1608]: time="2025-12-13T22:57:58.388097037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 13 22:57:58.388136 containerd[1608]: time="2025-12-13T22:57:58.388111837Z" level=info msg="Connect containerd service" Dec 13 22:57:58.388170 containerd[1608]: time="2025-12-13T22:57:58.388143277Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 22:57:58.390311 containerd[1608]: time="2025-12-13T22:57:58.390279477Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 22:57:58.486628 containerd[1608]: time="2025-12-13T22:57:58.486493317Z" level=info msg="Start subscribing containerd event" Dec 13 22:57:58.486628 containerd[1608]: time="2025-12-13T22:57:58.486584517Z" level=info msg="Start recovering state" Dec 13 22:57:58.487073 containerd[1608]: time="2025-12-13T22:57:58.487017757Z" level=info msg="Start event monitor" Dec 13 22:57:58.487073 containerd[1608]: time="2025-12-13T22:57:58.487042797Z" level=info msg="Start cni network conf syncer for default" Dec 13 22:57:58.487073 containerd[1608]: time="2025-12-13T22:57:58.487053877Z" level=info msg="Start streaming server" Dec 13 22:57:58.487304 containerd[1608]: time="2025-12-13T22:57:58.487213317Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 13 22:57:58.487304 containerd[1608]: time="2025-12-13T22:57:58.487231677Z" level=info msg="runtime interface starting up..." 
Dec 13 22:57:58.487304 containerd[1608]: time="2025-12-13T22:57:58.487240077Z" level=info msg="starting plugins..." Dec 13 22:57:58.487304 containerd[1608]: time="2025-12-13T22:57:58.487257837Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 13 22:57:58.487741 containerd[1608]: time="2025-12-13T22:57:58.487711597Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 22:57:58.487784 containerd[1608]: time="2025-12-13T22:57:58.487769637Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 22:57:58.488702 containerd[1608]: time="2025-12-13T22:57:58.488671317Z" level=info msg="containerd successfully booted in 0.129992s" Dec 13 22:57:58.488852 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 22:57:58.514468 tar[1598]: linux-arm64/README.md Dec 13 22:57:58.533860 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 22:57:58.665747 systemd-networkd[1290]: eth0: Gained IPv6LL Dec 13 22:57:58.668109 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 22:57:58.670160 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 22:57:58.673001 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 22:57:58.676462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 22:57:58.685822 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 22:57:58.713211 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 22:57:58.715042 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 22:57:58.716625 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 22:57:58.718739 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 22:57:59.238519 sshd_keygen[1597]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 22:57:59.261628 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 22:57:59.264330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 22:57:59.268391 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 22:57:59.278913 (kubelet)[1695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 22:57:59.284589 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 22:57:59.284881 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 22:57:59.289597 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 22:57:59.313651 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 22:57:59.319228 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 22:57:59.321747 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 22:57:59.323081 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 22:57:59.324180 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 22:57:59.325488 systemd[1]: Startup finished in 1.447s (kernel) + 7.232s (initrd) + 3.660s (userspace) = 12.339s. 
Dec 13 22:57:59.622904 kubelet[1695]: E1213 22:57:59.622833 1695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 22:57:59.625424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 22:57:59.625570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 22:57:59.625934 systemd[1]: kubelet.service: Consumed 757ms CPU time, 256.8M memory peak. Dec 13 22:58:00.401073 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 22:58:00.402395 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:33672.service - OpenSSH per-connection server daemon (10.0.0.1:33672). Dec 13 22:58:00.469076 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 33672 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:58:00.472911 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:00.480383 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 22:58:00.481634 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 22:58:00.486766 systemd-logind[1585]: New session 1 of user core. Dec 13 22:58:00.513666 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 22:58:00.516810 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 22:58:00.535693 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:00.538172 systemd-logind[1585]: New session 2 of user core. Dec 13 22:58:00.659135 systemd[1723]: Queued start job for default target default.target. Dec 13 22:58:00.667498 systemd[1723]: Created slice app.slice - User Application Slice. Dec 13 22:58:00.667532 systemd[1723]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 13 22:58:00.667544 systemd[1723]: Reached target paths.target - Paths. Dec 13 22:58:00.667622 systemd[1723]: Reached target timers.target - Timers. Dec 13 22:58:00.668873 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 22:58:00.669661 systemd[1723]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 13 22:58:00.683000 systemd[1723]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 13 22:58:00.684365 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 22:58:00.685418 systemd[1723]: Reached target sockets.target - Sockets. Dec 13 22:58:00.685694 systemd[1723]: Reached target basic.target - Basic System. Dec 13 22:58:00.685756 systemd[1723]: Reached target default.target - Main User Target. Dec 13 22:58:00.685783 systemd[1723]: Startup finished in 142ms. Dec 13 22:58:00.685979 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 22:58:00.689921 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 22:58:00.708810 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:33676.service - OpenSSH per-connection server daemon (10.0.0.1:33676). 
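The kubelet failure above is expected on a node that has not yet been joined to a cluster: kubeadm normally writes /var/lib/kubelet/config.yaml during kubeadm init or kubeadm join, so the error clears once the node is bootstrapped. For reference, a minimal hand-written KubeletConfiguration of the kind that file contains might look like the following (illustrative only; the generated file carries many more fields):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd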
Dec 13 22:58:00.759607 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 33676 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:58:00.761012 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:00.767983 systemd-logind[1585]: New session 3 of user core. Dec 13 22:58:00.773806 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 22:58:00.790233 sshd[1741]: Connection closed by 10.0.0.1 port 33676 Dec 13 22:58:00.790405 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Dec 13 22:58:00.801381 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:33676.service: Deactivated successfully. Dec 13 22:58:00.802999 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 22:58:00.803713 systemd-logind[1585]: Session 3 logged out. Waiting for processes to exit. Dec 13 22:58:00.805953 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:33682.service - OpenSSH per-connection server daemon (10.0.0.1:33682). Dec 13 22:58:00.807652 systemd-logind[1585]: Removed session 3. Dec 13 22:58:00.865843 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 33682 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:58:00.864153 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:00.870119 systemd-logind[1585]: New session 4 of user core. Dec 13 22:58:00.875865 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 22:58:00.885614 sshd[1751]: Connection closed by 10.0.0.1 port 33682 Dec 13 22:58:00.884116 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Dec 13 22:58:00.895728 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:33682.service: Deactivated successfully. Dec 13 22:58:00.897925 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 22:58:00.898603 systemd-logind[1585]: Session 4 logged out. Waiting for processes to exit. Dec 13 22:58:00.902649 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:44756.service - OpenSSH per-connection server daemon (10.0.0.1:44756). Dec 13 22:58:00.903310 systemd-logind[1585]: Removed session 4. Dec 13 22:58:00.962165 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 44756 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:58:00.964166 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:00.970423 systemd-logind[1585]: New session 5 of user core. Dec 13 22:58:00.984777 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 22:58:01.000267 sshd[1761]: Connection closed by 10.0.0.1 port 44756 Dec 13 22:58:01.000739 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Dec 13 22:58:01.011978 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:44756.service: Deactivated successfully. Dec 13 22:58:01.014027 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 22:58:01.016666 systemd-logind[1585]: Session 5 logged out. Waiting for processes to exit. Dec 13 22:58:01.018481 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:44760.service - OpenSSH per-connection server daemon (10.0.0.1:44760). Dec 13 22:58:01.019265 systemd-logind[1585]: Removed session 5. 
Dec 13 22:58:01.075009 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 44760 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:58:01.076357 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:01.080629 systemd-logind[1585]: New session 6 of user core. Dec 13 22:58:01.093821 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 22:58:01.112195 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 22:58:01.112492 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 22:58:01.125515 sudo[1772]: pam_unix(sudo:session): session closed for user root Dec 13 22:58:01.127628 sshd[1771]: Connection closed by 10.0.0.1 port 44760 Dec 13 22:58:01.127357 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Dec 13 22:58:01.135712 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:44760.service: Deactivated successfully. Dec 13 22:58:01.137328 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 22:58:01.138154 systemd-logind[1585]: Session 6 logged out. Waiting for processes to exit. Dec 13 22:58:01.140387 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:44762.service - OpenSSH per-connection server daemon (10.0.0.1:44762). Dec 13 22:58:01.141403 systemd-logind[1585]: Removed session 6. Dec 13 22:58:01.216394 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 44762 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:58:01.217786 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:01.222200 systemd-logind[1585]: New session 7 of user core. Dec 13 22:58:01.231744 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 22:58:01.244189 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 22:58:01.244462 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 22:58:01.247775 sudo[1785]: pam_unix(sudo:session): session closed for user root Dec 13 22:58:01.253945 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 22:58:01.254196 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 22:58:01.261443 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 22:58:01.304351 kernel: kauditd_printk_skb: 189 callbacks suppressed Dec 13 22:58:01.310844 kernel: audit: type=1305 audit(1765666681.298:228): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 22:58:01.313693 kernel: audit: type=1300 audit(1765666681.298:228): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffea475fe0 a2=420 a3=0 items=0 ppid=1790 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:01.318826 kernel: audit: type=1327 audit(1765666681.298:228): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 22:58:01.322716 kernel: audit: type=1130 audit(1765666681.309:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:58:01.322768 kernel: audit: type=1131 audit(1765666681.309:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.322810 kernel: audit: type=1106 audit(1765666681.319:231): pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.298000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 22:58:01.298000 audit[1809]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffea475fe0 a2=420 a3=0 items=0 ppid=1790 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:01.298000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 22:58:01.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.319000 audit[1784]: USER_END pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.323015 augenrules[1809]: No rules Dec 13 22:58:01.305117 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 22:58:01.316270 sudo[1784]: pam_unix(sudo:session): session closed for user root Dec 13 22:58:01.308015 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 22:58:01.323470 sshd[1783]: Connection closed by 10.0.0.1 port 44762 Dec 13 22:58:01.323894 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Dec 13 22:58:01.326572 kernel: audit: type=1104 audit(1765666681.319:232): pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.319000 audit[1784]: CRED_DISP pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 22:58:01.323000 audit[1779]: USER_END pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.332196 kernel: audit: type=1106 audit(1765666681.323:233): pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.332226 kernel: audit: type=1104 audit(1765666681.323:234): pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.323000 audit[1779]: CRED_DISP pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.337230 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:44762.service: Deactivated successfully. Dec 13 22:58:01.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.10:22-10.0.0.1:44762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.341016 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 22:58:01.341804 systemd-logind[1585]: Session 7 logged out. Waiting for processes to exit. Dec 13 22:58:01.342621 kernel: audit: type=1131 audit(1765666681.337:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.10:22-10.0.0.1:44762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.10:22-10.0.0.1:44764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.344123 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:44764.service - OpenSSH per-connection server daemon (10.0.0.1:44764). Dec 13 22:58:01.344652 systemd-logind[1585]: Removed session 7. 
Dec 13 22:58:01.401000 audit[1818]: USER_ACCT pid=1818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.402755 sshd[1818]: Accepted publickey for core from 10.0.0.1 port 44764 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:58:01.402000 audit[1818]: CRED_ACQ pid=1818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.402000 audit[1818]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbe247b0 a2=3 a3=0 items=0 ppid=1 pid=1818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:01.402000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:58:01.404050 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:58:01.408549 systemd-logind[1585]: New session 8 of user core. Dec 13 22:58:01.415812 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 22:58:01.417000 audit[1818]: USER_START pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.418000 audit[1822]: CRED_ACQ pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:01.428533 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 22:58:01.428830 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 22:58:01.426000 audit[1823]: USER_ACCT pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.427000 audit[1823]: CRED_REFR pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.427000 audit[1823]: USER_START pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:01.710067 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 22:58:01.726854 (dockerd)[1846]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 22:58:01.962744 dockerd[1846]: time="2025-12-13T22:58:01.962578917Z" level=info msg="Starting up" Dec 13 22:58:01.965174 dockerd[1846]: time="2025-12-13T22:58:01.965098197Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 13 22:58:01.975545 dockerd[1846]: time="2025-12-13T22:58:01.975501557Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 13 22:58:02.185068 systemd[1]: var-lib-docker-metacopy\x2dcheck207970409-merged.mount: Deactivated successfully. Dec 13 22:58:02.195083 dockerd[1846]: time="2025-12-13T22:58:02.195001277Z" level=info msg="Loading containers: start." Dec 13 22:58:02.204585 kernel: Initializing XFRM netlink socket Dec 13 22:58:02.245000 audit[1901]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.245000 audit[1901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffe3e4e40 a2=0 a3=0 items=0 ppid=1846 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.245000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 22:58:02.247000 audit[1903]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.247000 audit[1903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe48c33c0 a2=0 a3=0 items=0 ppid=1846 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 22:58:02.248000 audit[1905]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.248000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed1fcf30 a2=0 a3=0 items=0 ppid=1846 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.248000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 22:58:02.250000 audit[1907]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.250000 audit[1907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6204800 a2=0 a3=0 items=0 ppid=1846 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.250000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 22:58:02.252000 audit[1909]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.252000 audit[1909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe54bd60 a2=0 a3=0 items=0 ppid=1846 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 22:58:02.254000 audit[1911]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.254000 audit[1911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff9003ff0 a2=0 a3=0 items=0 ppid=1846 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 22:58:02.256000 audit[1913]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1913 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.256000 audit[1913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc6728d00 a2=0 a3=0 items=0 ppid=1846 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 22:58:02.258000 audit[1915]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1915 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.258000 audit[1915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff835a9e0 a2=0 a3=0 items=0 ppid=1846 pid=1915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 22:58:02.285000 audit[1918]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.285000 audit[1918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc9660b80 a2=0 a3=0 items=0 ppid=1846 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.285000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 22:58:02.287000 audit[1920]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.287000 audit[1920]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdba1a9b0 a2=0 a3=0 items=0 ppid=1846 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 22:58:02.289000 audit[1922]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.289000 audit[1922]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff3507830 a2=0 a3=0 items=0 ppid=1846 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 22:58:02.291000 audit[1924]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1924 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.291000 audit[1924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd4840290 a2=0 a3=0 items=0 ppid=1846 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 22:58:02.293000 audit[1926]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.293000 audit[1926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe72556c0 a2=0 a3=0 items=0 ppid=1846 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 22:58:02.325000 audit[1956]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.325000 audit[1956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc75fa410 a2=0 a3=0 items=0 ppid=1846 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.325000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 22:58:02.327000 audit[1958]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.327000 audit[1958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe42886a0 a2=0 a3=0 items=0 ppid=1846 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 22:58:02.329000 audit[1960]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.329000 audit[1960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee190a00 a2=0 a3=0 items=0 ppid=1846 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.329000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 22:58:02.331000 audit[1962]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.331000 audit[1962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff4028cb0 a2=0 a3=0 items=0 ppid=1846 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 22:58:02.332000 audit[1964]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.332000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe90b5d30 a2=0 a3=0 items=0 ppid=1846 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 22:58:02.334000 audit[1966]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.334000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc3993070 a2=0 a3=0 items=0 ppid=1846 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.334000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 22:58:02.336000 audit[1968]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1968 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.336000 audit[1968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffc9d8ff0 a2=0 a3=0 items=0 ppid=1846 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.336000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 22:58:02.338000 audit[1970]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.338000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff8e2ea50 a2=0 a3=0 items=0 ppid=1846 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.338000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 22:58:02.340000 audit[1972]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.340000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe82bb1c0 a2=0 a3=0 items=0 ppid=1846 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 13 22:58:02.342000 audit[1974]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.342000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdee640d0 a2=0 a3=0 items=0 ppid=1846 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.342000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 22:58:02.344000 audit[1976]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.344000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe1a094d0 a2=0 a3=0 items=0 ppid=1846 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 22:58:02.346000 audit[1978]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1978 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.346000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe5b1fce0 a2=0 a3=0 items=0 ppid=1846 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 22:58:02.348000 audit[1980]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.348000 audit[1980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd9a09d90 a2=0 a3=0 items=0 ppid=1846 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 22:58:02.353000 audit[1985]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.353000 audit[1985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe796f680 a2=0 a3=0 items=0 ppid=1846 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.353000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 22:58:02.355000 audit[1987]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.355000 audit[1987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc7bbcfb0 a2=0 a3=0 items=0 ppid=1846 pid=1987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 22:58:02.357000 audit[1989]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1989 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.357000 audit[1989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff2013800 a2=0 a3=0 items=0 ppid=1846 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 22:58:02.358000 audit[1991]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.358000 audit[1991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff7b4b010 a2=0 a3=0 items=0 ppid=1846 pid=1991 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 22:58:02.360000 audit[1993]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.360000 audit[1993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff26826f0 a2=0 a3=0 items=0 ppid=1846 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 22:58:02.362000 audit[1995]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:02.362000 audit[1995]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe431ab40 a2=0 a3=0 items=0 ppid=1846 pid=1995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.362000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 22:58:02.378000 audit[1999]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.378000 audit[1999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffe1d75640 a2=0 a3=0 items=0 ppid=1846 pid=1999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 22:58:02.380000 audit[2001]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.380000 audit[2001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc8737220 a2=0 a3=0 items=0 ppid=1846 pid=2001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 22:58:02.388000 audit[2009]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.388000 audit[2009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc02acc20 a2=0 a3=0 items=0 ppid=1846 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.388000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 13 22:58:02.396000 audit[2015]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.396000 audit[2015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcbbe84a0 a2=0 a3=0 items=0 ppid=1846 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.396000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 13 22:58:02.398000 audit[2017]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.398000 audit[2017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffdb18ca70 a2=0 a3=0 items=0 ppid=1846 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 22:58:02.401000 audit[2019]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.401000 audit[2019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffdf771400 a2=0 a3=0 items=0 ppid=1846 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.401000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 13 22:58:02.402000 audit[2021]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.402000 audit[2021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffeaa14e20 a2=0 a3=0 items=0 ppid=1846 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.402000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 22:58:02.404000 audit[2023]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:02.404000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe1e98ee0 a2=0 a3=0 items=0 ppid=1846 pid=2023 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:02.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 22:58:02.407043 systemd-networkd[1290]: docker0: Link UP Dec 13 22:58:02.410105 dockerd[1846]: time="2025-12-13T22:58:02.410053477Z" level=info msg="Loading containers: done." Dec 13 22:58:02.424194 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2383934694-merged.mount: Deactivated successfully. Dec 13 22:58:02.430413 dockerd[1846]: time="2025-12-13T22:58:02.430119797Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 22:58:02.430413 dockerd[1846]: time="2025-12-13T22:58:02.430222957Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 13 22:58:02.431162 dockerd[1846]: time="2025-12-13T22:58:02.431137357Z" level=info msg="Initializing buildkit" Dec 13 22:58:02.453943 dockerd[1846]: time="2025-12-13T22:58:02.453903997Z" level=info msg="Completed buildkit initialization" Dec 13 22:58:02.460323 dockerd[1846]: time="2025-12-13T22:58:02.460258317Z" level=info msg="Daemon has completed initialization" Dec 13 22:58:02.460448 dockerd[1846]: time="2025-12-13T22:58:02.460339597Z" level=info msg="API listen on /run/docker.sock" Dec 13 22:58:02.460594 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 22:58:02.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:02.991911 containerd[1608]: time="2025-12-13T22:58:02.991852677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 13 22:58:03.568330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298234534.mount: Deactivated successfully. 
Dec 13 22:58:04.436843 containerd[1608]: time="2025-12-13T22:58:04.436774477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:04.437334 containerd[1608]: time="2025-12-13T22:58:04.437269397Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=24867006" Dec 13 22:58:04.438221 containerd[1608]: time="2025-12-13T22:58:04.438185517Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:04.440395 containerd[1608]: time="2025-12-13T22:58:04.440356797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:04.442078 containerd[1608]: time="2025-12-13T22:58:04.442054197Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.45015224s" Dec 13 22:58:04.442127 containerd[1608]: time="2025-12-13T22:58:04.442085397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 13 22:58:04.442728 containerd[1608]: time="2025-12-13T22:58:04.442707517Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 13 22:58:05.514462 containerd[1608]: time="2025-12-13T22:58:05.514388237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:05.515001 containerd[1608]: time="2025-12-13T22:58:05.514952557Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22610801" Dec 13 22:58:05.515913 containerd[1608]: time="2025-12-13T22:58:05.515872957Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:05.518860 containerd[1608]: time="2025-12-13T22:58:05.518827517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:05.520516 containerd[1608]: time="2025-12-13T22:58:05.520473557Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.07773508s" Dec 13 22:58:05.520563 containerd[1608]: time="2025-12-13T22:58:05.520518317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 13 
22:58:05.521011 containerd[1608]: time="2025-12-13T22:58:05.520989437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 13 22:58:06.854100 containerd[1608]: time="2025-12-13T22:58:06.854049037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:06.855034 containerd[1608]: time="2025-12-13T22:58:06.854987757Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17610300" Dec 13 22:58:06.856146 containerd[1608]: time="2025-12-13T22:58:06.855725597Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:06.858206 containerd[1608]: time="2025-12-13T22:58:06.858175597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:06.859580 containerd[1608]: time="2025-12-13T22:58:06.859224797Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.33820404s" Dec 13 22:58:06.859580 containerd[1608]: time="2025-12-13T22:58:06.859270597Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 13 22:58:06.859721 containerd[1608]: time="2025-12-13T22:58:06.859695277Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 13 22:58:07.816829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920963099.mount: Deactivated successfully. 
Dec 13 22:58:08.048041 containerd[1608]: time="2025-12-13T22:58:08.047994197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:08.048550 containerd[1608]: time="2025-12-13T22:58:08.048508637Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27558078" Dec 13 22:58:08.049352 containerd[1608]: time="2025-12-13T22:58:08.049302797Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:08.051323 containerd[1608]: time="2025-12-13T22:58:08.051278557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:08.052085 containerd[1608]: time="2025-12-13T22:58:08.051777797Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.1920514s" Dec 13 22:58:08.052085 containerd[1608]: time="2025-12-13T22:58:08.051806477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 13 22:58:08.052392 containerd[1608]: time="2025-12-13T22:58:08.052365557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 13 22:58:08.836503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2352838901.mount: Deactivated successfully. 
Dec 13 22:58:09.469833 containerd[1608]: time="2025-12-13T22:58:09.469682517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:09.471080 containerd[1608]: time="2025-12-13T22:58:09.470679397Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Dec 13 22:58:09.471607 containerd[1608]: time="2025-12-13T22:58:09.471580997Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:09.476153 containerd[1608]: time="2025-12-13T22:58:09.476106797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:09.477633 containerd[1608]: time="2025-12-13T22:58:09.477592237Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.42519384s" Dec 13 22:58:09.477711 containerd[1608]: time="2025-12-13T22:58:09.477647997Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 13 22:58:09.478407 containerd[1608]: time="2025-12-13T22:58:09.478369957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 22:58:09.772171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 22:58:09.773683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 22:58:09.910221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 22:58:09.911300 kernel: kauditd_printk_skb: 132 callbacks suppressed Dec 13 22:58:09.911384 kernel: audit: type=1130 audit(1765666689.908:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:09.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:09.914112 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 22:58:09.988688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287343202.mount: Deactivated successfully. 
Dec 13 22:58:09.996579 containerd[1608]: time="2025-12-13T22:58:09.994252277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 22:58:09.996579 containerd[1608]: time="2025-12-13T22:58:09.994938837Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 22:58:09.996579 containerd[1608]: time="2025-12-13T22:58:09.995741277Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 22:58:09.998001 containerd[1608]: time="2025-12-13T22:58:09.997964357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 22:58:10.001456 containerd[1608]: time="2025-12-13T22:58:10.001427357Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 523.02376ms" Dec 13 22:58:10.001456 containerd[1608]: time="2025-12-13T22:58:10.001456477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 22:58:10.001957 containerd[1608]: time="2025-12-13T22:58:10.001880117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 13 22:58:10.017785 kubelet[2202]: E1213 22:58:10.017748 2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 22:58:10.020972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 22:58:10.021103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 22:58:10.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 22:58:10.021437 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.1M memory peak. Dec 13 22:58:10.024591 kernel: audit: type=1131 audit(1765666690.019:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 22:58:10.520160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2097269405.mount: Deactivated successfully. 
Dec 13 22:58:12.195584 containerd[1608]: time="2025-12-13T22:58:12.195433077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:12.196518 containerd[1608]: time="2025-12-13T22:58:12.195939797Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Dec 13 22:58:12.196989 containerd[1608]: time="2025-12-13T22:58:12.196961917Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:12.199858 containerd[1608]: time="2025-12-13T22:58:12.199830917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:12.201048 containerd[1608]: time="2025-12-13T22:58:12.200995877Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.19908604s" Dec 13 22:58:12.201048 containerd[1608]: time="2025-12-13T22:58:12.201023957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 13 22:58:16.426516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 22:58:16.431848 kernel: audit: type=1130 audit(1765666696.425:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:16.431885 kernel: audit: type=1131 audit(1765666696.425:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:16.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:16.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:16.426690 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.1M memory peak. Dec 13 22:58:16.430299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 22:58:16.458217 systemd[1]: Reload requested from client PID 2298 ('systemctl') (unit session-8.scope)... Dec 13 22:58:16.458236 systemd[1]: Reloading... Dec 13 22:58:16.541668 zram_generator::config[2350]: No configuration found. Dec 13 22:58:16.755835 systemd[1]: Reloading finished in 297 ms. 
Dec 13 22:58:16.778214 kernel: audit: type=1334 audit(1765666696.773:290): prog-id=67 op=LOAD Dec 13 22:58:16.778308 kernel: audit: type=1334 audit(1765666696.773:291): prog-id=68 op=LOAD Dec 13 22:58:16.778328 kernel: audit: type=1334 audit(1765666696.773:292): prog-id=61 op=UNLOAD Dec 13 22:58:16.778345 kernel: audit: type=1334 audit(1765666696.773:293): prog-id=62 op=UNLOAD Dec 13 22:58:16.773000 audit: BPF prog-id=67 op=LOAD Dec 13 22:58:16.773000 audit: BPF prog-id=68 op=LOAD Dec 13 22:58:16.773000 audit: BPF prog-id=61 op=UNLOAD Dec 13 22:58:16.773000 audit: BPF prog-id=62 op=UNLOAD Dec 13 22:58:16.782893 kernel: audit: type=1334 audit(1765666696.775:294): prog-id=69 op=LOAD Dec 13 22:58:16.782943 kernel: audit: type=1334 audit(1765666696.775:295): prog-id=60 op=UNLOAD Dec 13 22:58:16.782961 kernel: audit: type=1334 audit(1765666696.775:296): prog-id=70 op=LOAD Dec 13 22:58:16.782978 kernel: audit: type=1334 audit(1765666696.775:297): prog-id=53 op=UNLOAD Dec 13 22:58:16.775000 audit: BPF prog-id=69 op=LOAD Dec 13 22:58:16.775000 audit: BPF prog-id=60 op=UNLOAD Dec 13 22:58:16.775000 audit: BPF prog-id=70 op=LOAD Dec 13 22:58:16.775000 audit: BPF prog-id=53 op=UNLOAD Dec 13 22:58:16.776000 audit: BPF prog-id=71 op=LOAD Dec 13 22:58:16.776000 audit: BPF prog-id=72 op=LOAD Dec 13 22:58:16.776000 audit: BPF prog-id=54 op=UNLOAD Dec 13 22:58:16.776000 audit: BPF prog-id=55 op=UNLOAD Dec 13 22:58:16.777000 audit: BPF prog-id=73 op=LOAD Dec 13 22:58:16.777000 audit: BPF prog-id=50 op=UNLOAD Dec 13 22:58:16.777000 audit: BPF prog-id=74 op=LOAD Dec 13 22:58:16.777000 audit: BPF prog-id=75 op=LOAD Dec 13 22:58:16.777000 audit: BPF prog-id=51 op=UNLOAD Dec 13 22:58:16.777000 audit: BPF prog-id=52 op=UNLOAD Dec 13 22:58:16.778000 audit: BPF prog-id=76 op=LOAD Dec 13 22:58:16.778000 audit: BPF prog-id=64 op=UNLOAD Dec 13 22:58:16.778000 audit: BPF prog-id=77 op=LOAD Dec 13 22:58:16.778000 audit: BPF prog-id=78 op=LOAD Dec 13 22:58:16.778000 audit: BPF prog-id=65 op=UNLOAD Dec 13 22:58:16.778000 audit: BPF prog-id=66 op=UNLOAD Dec 13 22:58:16.779000 audit: BPF prog-id=79 op=LOAD Dec 13 22:58:16.779000 audit: BPF prog-id=56 op=UNLOAD Dec 13 22:58:16.780000 audit: BPF prog-id=80 op=LOAD Dec 13 22:58:16.781000 audit: BPF prog-id=81 op=LOAD Dec 13 22:58:16.781000 audit: BPF prog-id=57 op=UNLOAD Dec 13 22:58:16.781000 audit: BPF prog-id=58 op=UNLOAD Dec 13 22:58:16.802000 audit: BPF prog-id=82 op=LOAD Dec 13 22:58:16.802000 audit: BPF prog-id=47 op=UNLOAD Dec 13 22:58:16.802000 audit: BPF prog-id=83 op=LOAD Dec 13 22:58:16.802000 audit: BPF prog-id=84 op=LOAD Dec 13 22:58:16.802000 audit: BPF prog-id=48 op=UNLOAD Dec 13 22:58:16.802000 audit: BPF prog-id=49 op=UNLOAD Dec 13 22:58:16.803000 audit: BPF prog-id=85 op=LOAD Dec 13 22:58:16.803000 audit: BPF prog-id=63 op=UNLOAD Dec 13 22:58:16.803000 audit: BPF prog-id=86 op=LOAD Dec 13 22:58:16.803000 audit: BPF prog-id=59 op=UNLOAD Dec 13 22:58:16.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:16.816240 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 22:58:16.819479 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 22:58:16.819779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
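The burst of `audit: BPF prog-id=... op=LOAD/UNLOAD` records above is systemd cycling the BPF programs it attaches to units during the daemon reload. A minimal, hypothetical log-analysis sketch (not part of any shipped tool) that tallies such records to confirm that loads and unloads are paired:

```python
import re
from collections import Counter

# Hypothetical helper: count BPF LOAD/UNLOAD audit records in a journal dump
# such as the excerpt above, to see how many programs a reload cycled.
BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def tally_bpf_ops(log_text: str) -> Counter:
    # findall yields (prog_id, op) tuples; we only count the operations here.
    return Counter(op for _, op in BPF_RE.findall(log_text))

sample = "audit: BPF prog-id=67 op=LOAD ... audit: BPF prog-id=61 op=UNLOAD"
print(tally_bpf_ops(sample))   # Counter({'LOAD': 1, 'UNLOAD': 1})
```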
Dec 13 22:58:16.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:16.819833 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.3M memory peak. Dec 13 22:58:16.821195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 22:58:16.952773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 22:58:16.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:16.956403 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 22:58:16.989883 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 22:58:16.989883 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 22:58:16.989883 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 22:58:16.990207 kubelet[2391]: I1213 22:58:16.989937 2391 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 22:58:17.727107 kubelet[2391]: I1213 22:58:17.727058 2391 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 13 22:58:17.727107 kubelet[2391]: I1213 22:58:17.727091 2391 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 22:58:17.727366 kubelet[2391]: I1213 22:58:17.727350 2391 server.go:954] "Client rotation is on, will bootstrap in background" Dec 13 22:58:17.751856 kubelet[2391]: E1213 22:58:17.751808 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:58:17.752895 kubelet[2391]: I1213 22:58:17.752801 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 22:58:17.761130 kubelet[2391]: I1213 22:58:17.761101 2391 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 22:58:17.764079 kubelet[2391]: I1213 22:58:17.764061 2391 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 22:58:17.764736 kubelet[2391]: I1213 22:58:17.764695 2391 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 22:58:17.764898 kubelet[2391]: I1213 22:58:17.764739 2391 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 22:58:17.764999 kubelet[2391]: I1213 22:58:17.764969 2391 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 22:58:17.764999 kubelet[2391]: I1213 22:58:17.764978 2391 container_manager_linux.go:304] "Creating device plugin manager" Dec 13 22:58:17.765217 kubelet[2391]: I1213 22:58:17.765181 2391 state_mem.go:36] "Initialized new in-memory state store" Dec 13 22:58:17.767527 kubelet[2391]: I1213 22:58:17.767506 2391 kubelet.go:446] "Attempting to sync node with API server" Dec 13 22:58:17.767596 kubelet[2391]: I1213 22:58:17.767530 2391 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 22:58:17.767596 kubelet[2391]: I1213 22:58:17.767566 2391 kubelet.go:352] "Adding apiserver pod source" Dec 13 22:58:17.767596 kubelet[2391]: I1213 22:58:17.767578 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 22:58:17.770891 kubelet[2391]: I1213 22:58:17.770871 2391 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 22:58:17.771293 kubelet[2391]: W1213 22:58:17.771253 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 22:58:17.771327 kubelet[2391]: E1213 22:58:17.771310 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:58:17.771448 kubelet[2391]: I1213 22:58:17.771431 2391 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 22:58:17.771611 kubelet[2391]: W1213 22:58:17.771572 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 22:58:17.772127 kubelet[2391]: W1213 22:58:17.772091 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 22:58:17.772181 kubelet[2391]: E1213 22:58:17.772140 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:58:17.772394 kubelet[2391]: I1213 22:58:17.772373 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 22:58:17.772428 kubelet[2391]: I1213 22:58:17.772412 2391 server.go:1287] "Started kubelet" Dec 13 22:58:17.772529 kubelet[2391]: I1213 22:58:17.772498 2391 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 22:58:17.774609 kubelet[2391]: I1213 22:58:17.773491 2391 server.go:479] "Adding debug handlers to kubelet server" Dec 13 22:58:17.774609 kubelet[2391]: I1213 22:58:17.774283 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 22:58:17.774609 kubelet[2391]: I1213 22:58:17.774524 2391 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 22:58:17.776247 kubelet[2391]: E1213 22:58:17.775980 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880e881b875e3b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 22:58:17.772393397 +0000 UTC m=+0.813036801,LastTimestamp:2025-12-13 22:58:17.772393397 +0000 UTC m=+0.813036801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 22:58:17.776247 kubelet[2391]: E1213 22:58:17.776257 2391 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 22:58:17.776516 kubelet[2391]: I1213 22:58:17.776500 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 22:58:17.777076 kubelet[2391]: I1213 22:58:17.776506 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 22:58:17.777172 kubelet[2391]: I1213 22:58:17.777152 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 22:58:17.777443 kubelet[2391]: I1213 22:58:17.777419 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 22:58:17.777495 kubelet[2391]: I1213 22:58:17.777477 2391 reconciler.go:26] "Reconciler: start to sync state" Dec 13 22:58:17.777821 kubelet[2391]: W1213 22:58:17.777785 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 22:58:17.777897 kubelet[2391]: E1213 22:58:17.777826 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:58:17.778180 kubelet[2391]: E1213 22:58:17.778145 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 22:58:17.778256 kubelet[2391]: I1213 22:58:17.778218 2391 factory.go:221] Registration of the systemd container factory successfully Dec 13 22:58:17.778283 kubelet[2391]: E1213 22:58:17.778218 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Dec 13 22:58:17.778335 kubelet[2391]: I1213 22:58:17.778309 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 22:58:17.779337 kubelet[2391]: I1213 22:58:17.779312 2391 factory.go:221] Registration of the containerd container factory successfully Dec 13 22:58:17.780000 audit[2405]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.780000 audit[2405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeecfac50 a2=0 a3=0 items=0 ppid=2391 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 22:58:17.781000 audit[2406]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.781000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc80757b0 a2=0 a3=0 items=0 ppid=2391 pid=2406 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.781000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 22:58:17.784000 audit[2408]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.784000 audit[2408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc97a45c0 a2=0 a3=0 items=0 ppid=2391 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 22:58:17.787000 audit[2413]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.787000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe879ff10 a2=0 a3=0 items=0 ppid=2391 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 22:58:17.791383 kubelet[2391]: I1213 22:58:17.791351 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 22:58:17.791383 kubelet[2391]: I1213 22:58:17.791375 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 22:58:17.791480 kubelet[2391]: I1213 22:58:17.791395 2391 state_mem.go:36] "Initialized new in-memory state store" Dec 13 22:58:17.796046 kubelet[2391]: I1213 22:58:17.795751 2391 policy_none.go:49] "None policy: Start" Dec 13 22:58:17.796046 kubelet[2391]: I1213 22:58:17.795782 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 22:58:17.796046 kubelet[2391]: I1213 22:58:17.795795 2391 state_mem.go:35] "Initializing new in-memory state store" Dec 13 22:58:17.796000 audit[2417]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.796000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd5cbf1b0 a2=0 a3=0 items=0 ppid=2391 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 22:58:17.798918 kubelet[2391]: I1213 22:58:17.798890 2391 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 22:58:17.798000 audit[2418]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:17.798000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc65c0530 a2=0 a3=0 items=0 ppid=2391 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 22:58:17.800063 kubelet[2391]: I1213 22:58:17.800035 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 22:58:17.800063 kubelet[2391]: I1213 22:58:17.800059 2391 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 13 22:58:17.800124 kubelet[2391]: I1213 22:58:17.800077 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 13 22:58:17.800124 kubelet[2391]: I1213 22:58:17.800084 2391 kubelet.go:2382] "Starting kubelet main sync loop" Dec 13 22:58:17.800164 kubelet[2391]: E1213 22:58:17.800126 2391 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 22:58:17.798000 audit[2419]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.798000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffedb7a0d0 a2=0 a3=0 items=0 ppid=2391 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 22:58:17.800868 kubelet[2391]: W1213 22:58:17.800817 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Dec 13 22:58:17.800948 kubelet[2391]: E1213 22:58:17.800874 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Dec 13 22:58:17.800000 audit[2422]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:17.800000 audit[2422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3776690 a2=0 a3=0 items=0 ppid=2391 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 
22:58:17.801000 audit[2421]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.801000 audit[2421]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebb76f10 a2=0 a3=0 items=0 ppid=2391 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 22:58:17.804183 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 22:58:17.802000 audit[2424]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:17.802000 audit[2424]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb6905a0 a2=0 a3=0 items=0 ppid=2391 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 22:58:17.803000 audit[2423]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2423 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:17.803000 audit[2423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffbaa0e30 a2=0 a3=0 items=0 ppid=2391 pid=2423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 22:58:17.804000 audit[2425]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:17.804000 audit[2425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeec5de20 a2=0 a3=0 items=0 ppid=2391 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:17.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 22:58:17.819861 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 22:58:17.823018 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
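Each NETFILTER_CFG record above is followed by a PROCTITLE record whose value is the hex-encoded command line (argv elements separated by NUL bytes) of the iptables/ip6tables invocation kubelet made. A small standard-library sketch that turns those hex strings back into readable commands; the sample value is copied from one of the records above:

```python
# Hypothetical helper (not part of the audit tooling): decode the hex-encoded
# PROCTITLE field of an audit record into the original command line.
def decode_proctitle(hex_str: str) -> str:
    # proctitle is the raw /proc/<pid>/cmdline: argv elements joined by NUL bytes.
    raw = bytes.fromhex(hex_str)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    sample = ("69707461626C6573002D770035002D5700313030303030002D4E00"
              "4B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
```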
Dec 13 22:58:17.837509 kubelet[2391]: I1213 22:58:17.837439 2391 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 22:58:17.838047 kubelet[2391]: I1213 22:58:17.838032 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 22:58:17.838180 kubelet[2391]: I1213 22:58:17.838048 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 22:58:17.838311 kubelet[2391]: I1213 22:58:17.838292 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 22:58:17.839329 kubelet[2391]: E1213 22:58:17.839305 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 22:58:17.839461 kubelet[2391]: E1213 22:58:17.839448 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 22:58:17.909182 systemd[1]: Created slice kubepods-burstable-pod3c32dc4cf8332c4517a24728eaa8e6d1.slice - libcontainer container kubepods-burstable-pod3c32dc4cf8332c4517a24728eaa8e6d1.slice. Dec 13 22:58:17.935061 kubelet[2391]: E1213 22:58:17.935019 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:17.937256 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 13 22:58:17.939090 kubelet[2391]: E1213 22:58:17.939063 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:17.939217 kubelet[2391]: I1213 22:58:17.939168 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 22:58:17.939650 kubelet[2391]: E1213 22:58:17.939624 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Dec 13 22:58:17.951174 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. 
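The Node Config dump logged earlier by container_manager_linux.go is plain JSON, so the hard-eviction thresholds that the eviction manager starting above will enforce can be pulled out directly. A minimal sketch with the thresholds copied (abridged) from that record; the field names follow the logged structure and nothing here is an additional kubelet API:

```python
import json

# HardEvictionThresholds copied (abridged) from the Node Config JSON logged
# above by container_manager_linux.go.
node_config_json = """
{"HardEvictionThresholds":[
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}
]}
"""

for t in json.loads(node_config_json)["HardEvictionThresholds"]:
    # A threshold is expressed either as an absolute Quantity or a Percentage.
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    print(f'{t["Signal"]} {t["Operator"]} {value}')
# -> imagefs.available LessThan 15%, memory.available LessThan 100Mi,
#    nodefs.available LessThan 10%
```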
Dec 13 22:58:17.952938 kubelet[2391]: E1213 22:58:17.952891 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:17.979477 kubelet[2391]: E1213 22:58:17.979372 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Dec 13 22:58:18.078942 kubelet[2391]: I1213 22:58:18.078895 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c32dc4cf8332c4517a24728eaa8e6d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c32dc4cf8332c4517a24728eaa8e6d1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:18.078942 kubelet[2391]: I1213 22:58:18.078938 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:18.079315 kubelet[2391]: I1213 22:58:18.078958 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:18.079315 kubelet[2391]: I1213 22:58:18.078980 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c32dc4cf8332c4517a24728eaa8e6d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c32dc4cf8332c4517a24728eaa8e6d1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:18.079315 kubelet[2391]: I1213 22:58:18.078995 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c32dc4cf8332c4517a24728eaa8e6d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c32dc4cf8332c4517a24728eaa8e6d1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:18.079315 kubelet[2391]: I1213 22:58:18.079011 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:18.079315 kubelet[2391]: I1213 22:58:18.079045 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:18.079418 kubelet[2391]: I1213 22:58:18.079087 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:18.079418 kubelet[2391]: I1213 22:58:18.079112 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 13 22:58:18.141047 kubelet[2391]: I1213 22:58:18.141010 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 22:58:18.141392 kubelet[2391]: E1213 22:58:18.141352 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Dec 13 22:58:18.236074 kubelet[2391]: E1213 22:58:18.235991 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.236852 containerd[1608]: time="2025-12-13T22:58:18.236802197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c32dc4cf8332c4517a24728eaa8e6d1,Namespace:kube-system,Attempt:0,}" Dec 13 22:58:18.240095 kubelet[2391]: E1213 22:58:18.240062 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.240744 containerd[1608]: time="2025-12-13T22:58:18.240706997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 13 22:58:18.254380 kubelet[2391]: E1213 22:58:18.254083 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.255565 containerd[1608]: time="2025-12-13T22:58:18.254716637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 13 22:58:18.262146 containerd[1608]: time="2025-12-13T22:58:18.262101477Z" level=info msg="connecting to shim 030e733e7d346008f1aabf1d7e13c50699e74200f7069c78bffa58512f080f4b" address="unix:///run/containerd/s/fb815fefe5804e0b29ebd65c00073eb546c664e892f5d22ee111abeac6c16bcb" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:58:18.265066 containerd[1608]: time="2025-12-13T22:58:18.265025197Z" level=info msg="connecting to shim e769f0e305b6ca4a2d9cf4ec0d0f64b08defbae79921cfa6e19c57b0641124b2" address="unix:///run/containerd/s/9e644e2cc5cb3559a4cf83ce6e6a789d5425be037b068ddfce678bc00d338f5e" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:58:18.291297 containerd[1608]: time="2025-12-13T22:58:18.291253077Z" level=info msg="connecting to shim f9aaa1fd98d774b403e0e360ba9583630b92700207105bacbbc918b092082c7d" address="unix:///run/containerd/s/d0cb9c504c3b39c90f7b063981478c27f97dddd69542d1eeabb67b43745bb066" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:58:18.292812 systemd[1]: Started cri-containerd-030e733e7d346008f1aabf1d7e13c50699e74200f7069c78bffa58512f080f4b.scope - libcontainer container 
030e733e7d346008f1aabf1d7e13c50699e74200f7069c78bffa58512f080f4b. Dec 13 22:58:18.295986 systemd[1]: Started cri-containerd-e769f0e305b6ca4a2d9cf4ec0d0f64b08defbae79921cfa6e19c57b0641124b2.scope - libcontainer container e769f0e305b6ca4a2d9cf4ec0d0f64b08defbae79921cfa6e19c57b0641124b2. Dec 13 22:58:18.303000 audit: BPF prog-id=87 op=LOAD Dec 13 22:58:18.303000 audit: BPF prog-id=88 op=LOAD Dec 13 22:58:18.303000 audit[2460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2440 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306537333365376433343630303866316161626631643765313363 Dec 13 22:58:18.303000 audit: BPF prog-id=88 op=UNLOAD Dec 13 22:58:18.303000 audit[2460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306537333365376433343630303866316161626631643765313363 Dec 13 22:58:18.303000 audit: BPF prog-id=89 op=LOAD Dec 13 22:58:18.303000 audit[2460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2440 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306537333365376433343630303866316161626631643765313363 Dec 13 22:58:18.303000 audit: BPF prog-id=90 op=LOAD Dec 13 22:58:18.303000 audit[2460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2440 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306537333365376433343630303866316161626631643765313363 Dec 13 22:58:18.303000 audit: BPF prog-id=90 op=UNLOAD Dec 13 22:58:18.303000 audit[2460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.303000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306537333365376433343630303866316161626631643765313363 Dec 13 22:58:18.303000 audit: BPF prog-id=89 op=UNLOAD Dec 13 22:58:18.303000 audit[2460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306537333365376433343630303866316161626631643765313363 Dec 13 22:58:18.303000 audit: BPF prog-id=91 op=LOAD Dec 13 22:58:18.303000 audit[2460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2440 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.303000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033306537333365376433343630303866316161626631643765313363 Dec 13 22:58:18.307000 audit: BPF prog-id=92 op=LOAD Dec 13 22:58:18.308000 audit: BPF prog-id=93 op=LOAD Dec 13 22:58:18.308000 audit[2473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2451 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537363966306533303562366361346132643963663465633064306636 Dec 13 22:58:18.308000 audit: BPF prog-id=93 op=UNLOAD Dec 13 22:58:18.308000 audit[2473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537363966306533303562366361346132643963663465633064306636 Dec 13 22:58:18.308000 audit: BPF prog-id=94 op=LOAD Dec 13 22:58:18.308000 audit[2473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2451 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.308000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537363966306533303562366361346132643963663465633064306636 Dec 13 22:58:18.308000 audit: BPF prog-id=95 op=LOAD Dec 13 22:58:18.308000 audit[2473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2451 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537363966306533303562366361346132643963663465633064306636 Dec 13 22:58:18.308000 audit: BPF prog-id=95 op=UNLOAD Dec 13 22:58:18.308000 audit[2473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537363966306533303562366361346132643963663465633064306636 Dec 13 22:58:18.308000 audit: BPF prog-id=94 op=UNLOAD Dec 13 22:58:18.308000 audit[2473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537363966306533303562366361346132643963663465633064306636 Dec 13 22:58:18.308000 audit: BPF prog-id=96 op=LOAD Dec 13 22:58:18.308000 audit[2473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2451 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.308000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537363966306533303562366361346132643963663465633064306636 Dec 13 22:58:18.315237 kubelet[2391]: E1213 22:58:18.315124 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880e881b875e3b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 22:58:17.772393397 +0000 UTC m=+0.813036801,LastTimestamp:2025-12-13 22:58:17.772393397 +0000 UTC m=+0.813036801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 22:58:18.318778 systemd[1]: Started cri-containerd-f9aaa1fd98d774b403e0e360ba9583630b92700207105bacbbc918b092082c7d.scope - libcontainer container f9aaa1fd98d774b403e0e360ba9583630b92700207105bacbbc918b092082c7d. Dec 13 22:58:18.331000 audit: BPF prog-id=97 op=LOAD Dec 13 22:58:18.332000 audit: BPF prog-id=98 op=LOAD Dec 13 22:58:18.332000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2498 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639616161316664393864373734623430336530653336306261393538 Dec 13 22:58:18.332000 audit: BPF prog-id=98 op=UNLOAD Dec 13 22:58:18.332000 audit[2510]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639616161316664393864373734623430336530653336306261393538 Dec 13 22:58:18.332000 audit: BPF prog-id=99 op=LOAD Dec 13 22:58:18.332000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2498 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639616161316664393864373734623430336530653336306261393538 Dec 13 22:58:18.334000 audit: BPF prog-id=100 op=LOAD Dec 13 22:58:18.334000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2498 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639616161316664393864373734623430336530653336306261393538 Dec 13 22:58:18.334000 audit: BPF prog-id=100 op=UNLOAD Dec 13 22:58:18.334000 audit[2510]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2510 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639616161316664393864373734623430336530653336306261393538 Dec 13 22:58:18.334000 audit: BPF prog-id=99 op=UNLOAD Dec 13 22:58:18.334000 audit[2510]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639616161316664393864373734623430336530653336306261393538 Dec 13 22:58:18.336568 containerd[1608]: time="2025-12-13T22:58:18.336118197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c32dc4cf8332c4517a24728eaa8e6d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"030e733e7d346008f1aabf1d7e13c50699e74200f7069c78bffa58512f080f4b\"" Dec 13 22:58:18.334000 audit: BPF prog-id=101 op=LOAD Dec 13 22:58:18.334000 audit[2510]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2498 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.334000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639616161316664393864373734623430336530653336306261393538 Dec 13 22:58:18.337783 kubelet[2391]: E1213 22:58:18.337751 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.337858 containerd[1608]: time="2025-12-13T22:58:18.337760957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e769f0e305b6ca4a2d9cf4ec0d0f64b08defbae79921cfa6e19c57b0641124b2\"" Dec 13 22:58:18.338427 kubelet[2391]: E1213 22:58:18.338409 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.340103 containerd[1608]: time="2025-12-13T22:58:18.340020237Z" level=info msg="CreateContainer within sandbox \"e769f0e305b6ca4a2d9cf4ec0d0f64b08defbae79921cfa6e19c57b0641124b2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 22:58:18.340182 containerd[1608]: time="2025-12-13T22:58:18.340066917Z" level=info msg="CreateContainer within sandbox \"030e733e7d346008f1aabf1d7e13c50699e74200f7069c78bffa58512f080f4b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 22:58:18.350834 containerd[1608]: 
time="2025-12-13T22:58:18.350799037Z" level=info msg="Container 850376f8e4307a506c267802ce23ab30706ef97e561931fe9456fd918b4f8758: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:18.351328 containerd[1608]: time="2025-12-13T22:58:18.351304917Z" level=info msg="Container 6d71dd07dae7bfecf3319e48a09bc254519f177325cf29e83e59de3b4d63109b: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:18.363136 containerd[1608]: time="2025-12-13T22:58:18.363083757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9aaa1fd98d774b403e0e360ba9583630b92700207105bacbbc918b092082c7d\"" Dec 13 22:58:18.363954 kubelet[2391]: E1213 22:58:18.363922 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.364253 containerd[1608]: time="2025-12-13T22:58:18.364210557Z" level=info msg="CreateContainer within sandbox \"e769f0e305b6ca4a2d9cf4ec0d0f64b08defbae79921cfa6e19c57b0641124b2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"850376f8e4307a506c267802ce23ab30706ef97e561931fe9456fd918b4f8758\"" Dec 13 22:58:18.365114 containerd[1608]: time="2025-12-13T22:58:18.365079157Z" level=info msg="StartContainer for \"850376f8e4307a506c267802ce23ab30706ef97e561931fe9456fd918b4f8758\"" Dec 13 22:58:18.365988 containerd[1608]: time="2025-12-13T22:58:18.365951917Z" level=info msg="CreateContainer within sandbox \"f9aaa1fd98d774b403e0e360ba9583630b92700207105bacbbc918b092082c7d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 22:58:18.366563 containerd[1608]: time="2025-12-13T22:58:18.366519477Z" level=info msg="connecting to shim 850376f8e4307a506c267802ce23ab30706ef97e561931fe9456fd918b4f8758" address="unix:///run/containerd/s/9e644e2cc5cb3559a4cf83ce6e6a789d5425be037b068ddfce678bc00d338f5e" protocol=ttrpc version=3 Dec 13 22:58:18.368754 containerd[1608]: time="2025-12-13T22:58:18.368722237Z" level=info msg="CreateContainer within sandbox \"030e733e7d346008f1aabf1d7e13c50699e74200f7069c78bffa58512f080f4b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d71dd07dae7bfecf3319e48a09bc254519f177325cf29e83e59de3b4d63109b\"" Dec 13 22:58:18.369089 containerd[1608]: time="2025-12-13T22:58:18.369053517Z" level=info msg="StartContainer for \"6d71dd07dae7bfecf3319e48a09bc254519f177325cf29e83e59de3b4d63109b\"" Dec 13 22:58:18.370223 containerd[1608]: time="2025-12-13T22:58:18.370196037Z" level=info msg="connecting to shim 6d71dd07dae7bfecf3319e48a09bc254519f177325cf29e83e59de3b4d63109b" address="unix:///run/containerd/s/fb815fefe5804e0b29ebd65c00073eb546c664e892f5d22ee111abeac6c16bcb" protocol=ttrpc version=3 Dec 13 22:58:18.372419 containerd[1608]: time="2025-12-13T22:58:18.372388317Z" level=info msg="Container f8aa7c1dbe11034e07e5c85b94d4d1033b82f7bada3ab9ae99ff71ec1cf8ee4b: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:18.380792 kubelet[2391]: E1213 22:58:18.380747 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Dec 13 22:58:18.382725 containerd[1608]: time="2025-12-13T22:58:18.382688277Z" level=info msg="CreateContainer within sandbox 
\"f9aaa1fd98d774b403e0e360ba9583630b92700207105bacbbc918b092082c7d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f8aa7c1dbe11034e07e5c85b94d4d1033b82f7bada3ab9ae99ff71ec1cf8ee4b\"" Dec 13 22:58:18.384197 containerd[1608]: time="2025-12-13T22:58:18.383130957Z" level=info msg="StartContainer for \"f8aa7c1dbe11034e07e5c85b94d4d1033b82f7bada3ab9ae99ff71ec1cf8ee4b\"" Dec 13 22:58:18.384197 containerd[1608]: time="2025-12-13T22:58:18.384137237Z" level=info msg="connecting to shim f8aa7c1dbe11034e07e5c85b94d4d1033b82f7bada3ab9ae99ff71ec1cf8ee4b" address="unix:///run/containerd/s/d0cb9c504c3b39c90f7b063981478c27f97dddd69542d1eeabb67b43745bb066" protocol=ttrpc version=3 Dec 13 22:58:18.385742 systemd[1]: Started cri-containerd-850376f8e4307a506c267802ce23ab30706ef97e561931fe9456fd918b4f8758.scope - libcontainer container 850376f8e4307a506c267802ce23ab30706ef97e561931fe9456fd918b4f8758. Dec 13 22:58:18.389326 systemd[1]: Started cri-containerd-6d71dd07dae7bfecf3319e48a09bc254519f177325cf29e83e59de3b4d63109b.scope - libcontainer container 6d71dd07dae7bfecf3319e48a09bc254519f177325cf29e83e59de3b4d63109b. Dec 13 22:58:18.398000 audit: BPF prog-id=102 op=LOAD Dec 13 22:58:18.398000 audit: BPF prog-id=103 op=LOAD Dec 13 22:58:18.398000 audit[2566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2451 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835303337366638653433303761353036633236373830326365323361 Dec 13 22:58:18.398000 audit: BPF prog-id=103 op=UNLOAD Dec 13 22:58:18.398000 audit[2566]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835303337366638653433303761353036633236373830326365323361 Dec 13 22:58:18.399000 audit: BPF prog-id=104 op=LOAD Dec 13 22:58:18.399000 audit[2566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2451 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835303337366638653433303761353036633236373830326365323361 Dec 13 22:58:18.399000 audit: BPF prog-id=105 op=LOAD Dec 13 22:58:18.399000 audit[2566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2451 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835303337366638653433303761353036633236373830326365323361 Dec 13 22:58:18.399000 audit: BPF prog-id=105 op=UNLOAD Dec 13 22:58:18.399000 audit[2566]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835303337366638653433303761353036633236373830326365323361 Dec 13 22:58:18.399000 audit: BPF prog-id=104 op=UNLOAD Dec 13 22:58:18.399000 audit[2566]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2451 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835303337366638653433303761353036633236373830326365323361 Dec 13 22:58:18.399000 audit: BPF prog-id=106 op=LOAD Dec 13 22:58:18.399000 audit[2566]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2451 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835303337366638653433303761353036633236373830326365323361 Dec 13 22:58:18.409747 systemd[1]: Started cri-containerd-f8aa7c1dbe11034e07e5c85b94d4d1033b82f7bada3ab9ae99ff71ec1cf8ee4b.scope - libcontainer container f8aa7c1dbe11034e07e5c85b94d4d1033b82f7bada3ab9ae99ff71ec1cf8ee4b. 
Dec 13 22:58:18.412000 audit: BPF prog-id=107 op=LOAD Dec 13 22:58:18.413000 audit: BPF prog-id=108 op=LOAD Dec 13 22:58:18.413000 audit[2573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2440 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373164643037646165376266656366333331396534386130396263 Dec 13 22:58:18.413000 audit: BPF prog-id=108 op=UNLOAD Dec 13 22:58:18.413000 audit[2573]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373164643037646165376266656366333331396534386130396263 Dec 13 22:58:18.413000 audit: BPF prog-id=109 op=LOAD Dec 13 22:58:18.413000 audit[2573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2440 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373164643037646165376266656366333331396534386130396263 Dec 13 22:58:18.413000 audit: BPF prog-id=110 op=LOAD Dec 13 22:58:18.413000 audit[2573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2440 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373164643037646165376266656366333331396534386130396263 Dec 13 22:58:18.413000 audit: BPF prog-id=110 op=UNLOAD Dec 13 22:58:18.413000 audit[2573]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373164643037646165376266656366333331396534386130396263 Dec 13 22:58:18.413000 audit: BPF prog-id=109 op=UNLOAD Dec 13 22:58:18.413000 audit[2573]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2440 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373164643037646165376266656366333331396534386130396263 Dec 13 22:58:18.413000 audit: BPF prog-id=111 op=LOAD Dec 13 22:58:18.413000 audit[2573]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2440 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664373164643037646165376266656366333331396534386130396263 Dec 13 22:58:18.425000 audit: BPF prog-id=112 op=LOAD Dec 13 22:58:18.427491 containerd[1608]: time="2025-12-13T22:58:18.427186037Z" level=info msg="StartContainer for \"850376f8e4307a506c267802ce23ab30706ef97e561931fe9456fd918b4f8758\" returns successfully" Dec 13 22:58:18.426000 audit: BPF prog-id=113 op=LOAD Dec 13 22:58:18.426000 audit[2591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2498 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638616137633164626531313033346530376535633835623934643464 Dec 13 22:58:18.426000 audit: BPF prog-id=113 op=UNLOAD Dec 13 22:58:18.426000 audit[2591]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.426000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638616137633164626531313033346530376535633835623934643464 Dec 13 22:58:18.427000 audit: BPF prog-id=114 op=LOAD Dec 13 22:58:18.427000 audit[2591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2498 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.427000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638616137633164626531313033346530376535633835623934643464 Dec 13 22:58:18.427000 audit: BPF prog-id=115 op=LOAD Dec 13 22:58:18.427000 audit[2591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2498 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638616137633164626531313033346530376535633835623934643464 Dec 13 22:58:18.427000 audit: BPF prog-id=115 op=UNLOAD Dec 13 22:58:18.427000 audit[2591]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638616137633164626531313033346530376535633835623934643464 Dec 13 22:58:18.427000 audit: BPF prog-id=114 op=UNLOAD Dec 13 22:58:18.427000 audit[2591]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2498 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638616137633164626531313033346530376535633835623934643464 Dec 13 22:58:18.427000 audit: BPF prog-id=116 op=LOAD Dec 13 22:58:18.427000 audit[2591]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2498 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:18.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6638616137633164626531313033346530376535633835623934643464 Dec 13 22:58:18.448998 containerd[1608]: time="2025-12-13T22:58:18.448737677Z" level=info msg="StartContainer for \"6d71dd07dae7bfecf3319e48a09bc254519f177325cf29e83e59de3b4d63109b\" returns successfully" Dec 13 22:58:18.466765 containerd[1608]: time="2025-12-13T22:58:18.466731277Z" level=info msg="StartContainer for \"f8aa7c1dbe11034e07e5c85b94d4d1033b82f7bada3ab9ae99ff71ec1cf8ee4b\" returns successfully" Dec 13 22:58:18.542620 kubelet[2391]: I1213 22:58:18.542584 2391 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 
22:58:18.808029 kubelet[2391]: E1213 22:58:18.807935 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:18.808130 kubelet[2391]: E1213 22:58:18.808082 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.811782 kubelet[2391]: E1213 22:58:18.811758 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:18.812077 kubelet[2391]: E1213 22:58:18.812054 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:18.813547 kubelet[2391]: E1213 22:58:18.813524 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:18.813676 kubelet[2391]: E1213 22:58:18.813661 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:19.816432 kubelet[2391]: E1213 22:58:19.816379 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:19.817094 kubelet[2391]: E1213 22:58:19.817070 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 22:58:19.817218 kubelet[2391]: E1213 22:58:19.817067 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:19.817218 kubelet[2391]: E1213 22:58:19.817174 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:20.554667 kubelet[2391]: E1213 22:58:20.554502 2391 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 22:58:20.692892 kubelet[2391]: I1213 22:58:20.692829 2391 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 22:58:20.773717 kubelet[2391]: I1213 22:58:20.773644 2391 apiserver.go:52] "Watching apiserver" Dec 13 22:58:20.777654 kubelet[2391]: I1213 22:58:20.777607 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 22:58:20.778638 kubelet[2391]: I1213 22:58:20.778618 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:20.784581 kubelet[2391]: E1213 22:58:20.784313 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:20.784581 kubelet[2391]: I1213 22:58:20.784344 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 22:58:20.788444 kubelet[2391]: E1213 22:58:20.788142 2391 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 22:58:20.788444 kubelet[2391]: I1213 22:58:20.788167 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:20.790620 kubelet[2391]: E1213 22:58:20.790587 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:20.824452 kubelet[2391]: I1213 22:58:20.823814 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:20.825803 kubelet[2391]: E1213 22:58:20.825637 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:20.825803 kubelet[2391]: E1213 22:58:20.825806 2391 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:22.411290 systemd[1]: Reload requested from client PID 2664 ('systemctl') (unit session-8.scope)... Dec 13 22:58:22.411311 systemd[1]: Reloading... Dec 13 22:58:22.478600 zram_generator::config[2710]: No configuration found. Dec 13 22:58:22.682723 systemd[1]: Reloading finished in 271 ms. Dec 13 22:58:22.701273 kubelet[2391]: I1213 22:58:22.701135 2391 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 22:58:22.702353 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 22:58:22.716528 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 22:58:22.716890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 22:58:22.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:22.719795 kernel: kauditd_printk_skb: 203 callbacks suppressed Dec 13 22:58:22.719850 kernel: audit: type=1131 audit(1765666702.715:393): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:22.716959 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 129.6M memory peak. Dec 13 22:58:22.718808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 22:58:22.718000 audit: BPF prog-id=117 op=LOAD Dec 13 22:58:22.720730 kernel: audit: type=1334 audit(1765666702.718:394): prog-id=117 op=LOAD Dec 13 22:58:22.720764 kernel: audit: type=1334 audit(1765666702.718:395): prog-id=86 op=UNLOAD Dec 13 22:58:22.718000 audit: BPF prog-id=86 op=UNLOAD Dec 13 22:58:22.720000 audit: BPF prog-id=118 op=LOAD Dec 13 22:58:22.720000 audit: BPF prog-id=79 op=UNLOAD Dec 13 22:58:22.723074 kernel: audit: type=1334 audit(1765666702.720:396): prog-id=118 op=LOAD Dec 13 22:58:22.723103 kernel: audit: type=1334 audit(1765666702.720:397): prog-id=79 op=UNLOAD Dec 13 22:58:22.723126 kernel: audit: type=1334 audit(1765666702.720:398): prog-id=119 op=LOAD Dec 13 22:58:22.723147 kernel: audit: type=1334 audit(1765666702.720:399): prog-id=120 op=LOAD Dec 13 22:58:22.723160 kernel: audit: type=1334 audit(1765666702.720:400): prog-id=80 op=UNLOAD Dec 13 22:58:22.723175 kernel: audit: type=1334 audit(1765666702.720:401): prog-id=81 op=UNLOAD Dec 13 22:58:22.720000 audit: BPF prog-id=119 op=LOAD Dec 13 22:58:22.720000 audit: BPF prog-id=120 op=LOAD Dec 13 22:58:22.720000 audit: BPF prog-id=80 op=UNLOAD Dec 13 22:58:22.720000 audit: BPF prog-id=81 op=UNLOAD Dec 13 22:58:22.722000 audit: BPF prog-id=121 op=LOAD Dec 13 22:58:22.722000 audit: BPF prog-id=76 op=UNLOAD Dec 13 22:58:22.723000 audit: BPF prog-id=122 op=LOAD Dec 13 22:58:22.724000 audit: BPF prog-id=123 op=LOAD Dec 13 22:58:22.726577 kernel: audit: type=1334 audit(1765666702.722:402): prog-id=121 op=LOAD Dec 13 22:58:22.737000 audit: BPF prog-id=77 op=UNLOAD Dec 13 22:58:22.737000 audit: BPF prog-id=78 op=UNLOAD Dec 13 22:58:22.737000 audit: BPF prog-id=124 op=LOAD Dec 13 22:58:22.737000 audit: BPF prog-id=82 op=UNLOAD Dec 13 22:58:22.737000 audit: BPF prog-id=125 op=LOAD Dec 13 22:58:22.737000 audit: BPF prog-id=126 op=LOAD Dec 13 22:58:22.737000 audit: BPF prog-id=83 op=UNLOAD Dec 13 22:58:22.737000 audit: BPF prog-id=84 op=UNLOAD Dec 13 22:58:22.738000 audit: BPF prog-id=127 op=LOAD Dec 13 22:58:22.738000 audit: BPF prog-id=73 op=UNLOAD Dec 13 22:58:22.738000 audit: BPF prog-id=128 op=LOAD Dec 13 22:58:22.738000 audit: BPF prog-id=129 op=LOAD Dec 13 22:58:22.738000 audit: BPF prog-id=74 op=UNLOAD Dec 13 22:58:22.738000 audit: BPF prog-id=75 op=UNLOAD Dec 13 22:58:22.739000 audit: BPF prog-id=130 op=LOAD Dec 13 22:58:22.739000 audit: BPF prog-id=131 op=LOAD Dec 13 22:58:22.739000 audit: BPF prog-id=67 op=UNLOAD Dec 13 22:58:22.739000 audit: BPF prog-id=68 op=UNLOAD Dec 13 22:58:22.740000 audit: BPF prog-id=132 op=LOAD Dec 13 22:58:22.740000 audit: BPF prog-id=69 op=UNLOAD Dec 13 22:58:22.741000 audit: BPF prog-id=133 op=LOAD Dec 13 22:58:22.741000 audit: BPF prog-id=85 op=UNLOAD Dec 13 22:58:22.741000 audit: BPF prog-id=134 op=LOAD Dec 13 22:58:22.741000 audit: BPF prog-id=70 op=UNLOAD Dec 13 22:58:22.742000 audit: BPF prog-id=135 op=LOAD Dec 13 22:58:22.742000 audit: BPF prog-id=136 op=LOAD Dec 13 22:58:22.742000 audit: BPF prog-id=71 op=UNLOAD Dec 13 22:58:22.742000 audit: BPF prog-id=72 op=UNLOAD Dec 13 22:58:22.879442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 22:58:22.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:58:22.882998 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 22:58:22.942838 kubelet[2752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 22:58:22.942838 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 22:58:22.942838 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 22:58:22.943153 kubelet[2752]: I1213 22:58:22.942830 2752 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 22:58:22.951268 kubelet[2752]: I1213 22:58:22.951181 2752 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 13 22:58:22.951268 kubelet[2752]: I1213 22:58:22.951254 2752 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 22:58:22.951529 kubelet[2752]: I1213 22:58:22.951497 2752 server.go:954] "Client rotation is on, will bootstrap in background" Dec 13 22:58:22.955773 kubelet[2752]: I1213 22:58:22.955740 2752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 22:58:22.958405 kubelet[2752]: I1213 22:58:22.958237 2752 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 22:58:22.964985 kubelet[2752]: I1213 22:58:22.964964 2752 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 22:58:22.971787 kubelet[2752]: I1213 22:58:22.971750 2752 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 22:58:22.972006 kubelet[2752]: I1213 22:58:22.971968 2752 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 22:58:22.972163 kubelet[2752]: I1213 22:58:22.972002 2752 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 22:58:22.972236 kubelet[2752]: I1213 22:58:22.972167 2752 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 22:58:22.972236 kubelet[2752]: I1213 22:58:22.972176 2752 container_manager_linux.go:304] "Creating device plugin manager" Dec 13 22:58:22.972236 kubelet[2752]: I1213 22:58:22.972216 2752 state_mem.go:36] "Initialized new in-memory state store" Dec 13 22:58:22.972364 kubelet[2752]: I1213 22:58:22.972347 2752 kubelet.go:446] "Attempting to sync node with API server" Dec 13 22:58:22.972364 kubelet[2752]: I1213 22:58:22.972362 2752 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 22:58:22.972609 kubelet[2752]: I1213 22:58:22.972382 2752 kubelet.go:352] "Adding apiserver pod source" Dec 13 22:58:22.972609 kubelet[2752]: I1213 22:58:22.972392 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 22:58:22.976623 kubelet[2752]: I1213 22:58:22.975757 2752 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 22:58:22.980051 kubelet[2752]: I1213 22:58:22.976796 2752 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 22:58:22.980051 kubelet[2752]: I1213 22:58:22.977979 2752 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 22:58:22.980051 kubelet[2752]: I1213 22:58:22.978009 2752 server.go:1287] "Started kubelet" Dec 13 22:58:22.980051 kubelet[2752]: I1213 22:58:22.978323 2752 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 22:58:22.980051 kubelet[2752]: I1213 22:58:22.978822 2752 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 22:58:22.980051 kubelet[2752]: I1213 22:58:22.978996 2752 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 22:58:22.980181 kubelet[2752]: I1213 22:58:22.980095 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 22:58:22.983687 kubelet[2752]: I1213 22:58:22.980454 2752 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 22:58:22.983687 kubelet[2752]: I1213 22:58:22.981204 2752 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 22:58:22.983687 kubelet[2752]: I1213 22:58:22.981868 2752 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 22:58:22.985570 kubelet[2752]: I1213 22:58:22.982026 2752 reconciler.go:26] "Reconciler: start to sync state" Dec 13 22:58:22.985570 kubelet[2752]: I1213 22:58:22.983925 2752 server.go:479] "Adding debug handlers to kubelet server" Dec 13 22:58:22.987933 kubelet[2752]: E1213 22:58:22.983977 2752 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 22:58:22.990218 kubelet[2752]: I1213 22:58:22.990191 2752 factory.go:221] Registration of the containerd container factory successfully Dec 13 22:58:22.990218 kubelet[2752]: I1213 22:58:22.990213 2752 factory.go:221] Registration of the systemd container factory successfully Dec 13 22:58:22.991637 kubelet[2752]: I1213 22:58:22.990342 2752 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 22:58:23.016899 kubelet[2752]: I1213 22:58:23.016698 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 22:58:23.020709 kubelet[2752]: I1213 22:58:23.020678 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 22:58:23.020843 kubelet[2752]: I1213 22:58:23.020834 2752 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 13 22:58:23.020913 kubelet[2752]: I1213 22:58:23.020903 2752 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 13 22:58:23.020968 kubelet[2752]: I1213 22:58:23.020959 2752 kubelet.go:2382] "Starting kubelet main sync loop" Dec 13 22:58:23.021678 kubelet[2752]: E1213 22:58:23.021653 2752 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 22:58:23.057712 kubelet[2752]: I1213 22:58:23.057683 2752 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 22:58:23.057712 kubelet[2752]: I1213 22:58:23.057704 2752 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 22:58:23.057853 kubelet[2752]: I1213 22:58:23.057727 2752 state_mem.go:36] "Initialized new in-memory state store" Dec 13 22:58:23.057909 kubelet[2752]: I1213 22:58:23.057892 2752 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 22:58:23.057938 kubelet[2752]: I1213 22:58:23.057906 2752 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 22:58:23.057938 kubelet[2752]: I1213 22:58:23.057925 2752 policy_none.go:49] "None policy: Start" Dec 13 22:58:23.057938 kubelet[2752]: I1213 22:58:23.057933 2752 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 22:58:23.058000 kubelet[2752]: I1213 22:58:23.057942 2752 state_mem.go:35] "Initializing new in-memory state store" Dec 13 22:58:23.058054 kubelet[2752]: I1213 22:58:23.058041 2752 state_mem.go:75] "Updated machine memory state" Dec 13 22:58:23.061671 kubelet[2752]: I1213 22:58:23.061645 2752 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 22:58:23.061822 kubelet[2752]: I1213 22:58:23.061806 2752 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 22:58:23.061851 kubelet[2752]: I1213 22:58:23.061824 2752 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 22:58:23.062371 kubelet[2752]: I1213 22:58:23.062338 2752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 22:58:23.063480 kubelet[2752]: E1213 22:58:23.063412 2752 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 22:58:23.122840 kubelet[2752]: I1213 22:58:23.122790 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 22:58:23.122943 kubelet[2752]: I1213 22:58:23.122861 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:23.122943 kubelet[2752]: I1213 22:58:23.122864 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:23.166931 kubelet[2752]: I1213 22:58:23.166898 2752 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 22:58:23.174100 kubelet[2752]: I1213 22:58:23.173425 2752 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 22:58:23.174100 kubelet[2752]: I1213 22:58:23.173512 2752 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 22:58:23.186015 kubelet[2752]: I1213 22:58:23.185966 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:23.186015 kubelet[2752]: I1213 22:58:23.186010 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:23.186158 kubelet[2752]: I1213 22:58:23.186035 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 13 22:58:23.186158 kubelet[2752]: I1213 22:58:23.186052 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c32dc4cf8332c4517a24728eaa8e6d1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c32dc4cf8332c4517a24728eaa8e6d1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:23.186158 kubelet[2752]: I1213 22:58:23.186067 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c32dc4cf8332c4517a24728eaa8e6d1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c32dc4cf8332c4517a24728eaa8e6d1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:23.186158 kubelet[2752]: I1213 22:58:23.186084 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c32dc4cf8332c4517a24728eaa8e6d1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c32dc4cf8332c4517a24728eaa8e6d1\") " pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:23.186158 kubelet[2752]: I1213 22:58:23.186099 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:23.186267 kubelet[2752]: I1213 22:58:23.186113 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:23.186267 kubelet[2752]: I1213 22:58:23.186127 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 22:58:23.428586 kubelet[2752]: E1213 22:58:23.428523 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:23.428784 kubelet[2752]: E1213 22:58:23.428734 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:23.428917 kubelet[2752]: E1213 22:58:23.428878 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:23.973045 kubelet[2752]: I1213 22:58:23.973006 2752 apiserver.go:52] "Watching apiserver" Dec 13 22:58:23.983922 kubelet[2752]: I1213 22:58:23.982944 2752 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 22:58:24.045830 kubelet[2752]: I1213 22:58:24.045403 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:24.046216 kubelet[2752]: E1213 22:58:24.045863 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:24.046488 kubelet[2752]: E1213 22:58:24.045942 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:24.056440 kubelet[2752]: E1213 22:58:24.055986 2752 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 22:58:24.056802 kubelet[2752]: E1213 22:58:24.056684 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:24.080973 kubelet[2752]: I1213 22:58:24.080910 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.080892237 podStartE2EDuration="1.080892237s" podCreationTimestamp="2025-12-13 22:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:58:24.077234357 +0000 UTC 
m=+1.190994801" watchObservedRunningTime="2025-12-13 22:58:24.080892237 +0000 UTC m=+1.194652681" Dec 13 22:58:24.081141 kubelet[2752]: I1213 22:58:24.081022 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.081018197 podStartE2EDuration="1.081018197s" podCreationTimestamp="2025-12-13 22:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:58:24.067532197 +0000 UTC m=+1.181292641" watchObservedRunningTime="2025-12-13 22:58:24.081018197 +0000 UTC m=+1.194778641" Dec 13 22:58:24.122513 kubelet[2752]: I1213 22:58:24.122237 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.122216477 podStartE2EDuration="1.122216477s" podCreationTimestamp="2025-12-13 22:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:58:24.086155597 +0000 UTC m=+1.199916041" watchObservedRunningTime="2025-12-13 22:58:24.122216477 +0000 UTC m=+1.235976961" Dec 13 22:58:25.047033 kubelet[2752]: E1213 22:58:25.046967 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:25.047033 kubelet[2752]: E1213 22:58:25.046992 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:26.745509 kubelet[2752]: E1213 22:58:26.745458 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:28.615995 kubelet[2752]: I1213 22:58:28.615938 2752 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 22:58:28.617041 containerd[1608]: time="2025-12-13T22:58:28.616972077Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 22:58:28.618041 kubelet[2752]: I1213 22:58:28.618011 2752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 22:58:29.099353 kubelet[2752]: E1213 22:58:29.099307 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:29.220946 kubelet[2752]: I1213 22:58:29.220769 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/60e02a2c-92f2-4975-a960-251c51d613f1-kube-proxy\") pod \"kube-proxy-6bxm7\" (UID: \"60e02a2c-92f2-4975-a960-251c51d613f1\") " pod="kube-system/kube-proxy-6bxm7" Dec 13 22:58:29.220946 kubelet[2752]: I1213 22:58:29.220804 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60e02a2c-92f2-4975-a960-251c51d613f1-xtables-lock\") pod \"kube-proxy-6bxm7\" (UID: \"60e02a2c-92f2-4975-a960-251c51d613f1\") " pod="kube-system/kube-proxy-6bxm7" Dec 13 22:58:29.220946 kubelet[2752]: I1213 22:58:29.220825 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60e02a2c-92f2-4975-a960-251c51d613f1-lib-modules\") pod \"kube-proxy-6bxm7\" (UID: \"60e02a2c-92f2-4975-a960-251c51d613f1\") " pod="kube-system/kube-proxy-6bxm7" Dec 13 22:58:29.220946 kubelet[2752]: I1213 22:58:29.220843 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnqb7\" (UniqueName: \"kubernetes.io/projected/60e02a2c-92f2-4975-a960-251c51d613f1-kube-api-access-dnqb7\") pod \"kube-proxy-6bxm7\" (UID: \"60e02a2c-92f2-4975-a960-251c51d613f1\") " pod="kube-system/kube-proxy-6bxm7" Dec 13 22:58:29.223764 systemd[1]: Created slice kubepods-besteffort-pod60e02a2c_92f2_4975_a960_251c51d613f1.slice - libcontainer container kubepods-besteffort-pod60e02a2c_92f2_4975_a960_251c51d613f1.slice. Dec 13 22:58:29.537119 kubelet[2752]: E1213 22:58:29.537090 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:29.537855 containerd[1608]: time="2025-12-13T22:58:29.537803437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bxm7,Uid:60e02a2c-92f2-4975-a960-251c51d613f1,Namespace:kube-system,Attempt:0,}" Dec 13 22:58:29.565485 containerd[1608]: time="2025-12-13T22:58:29.565439397Z" level=info msg="connecting to shim 2f5a56d6ff32434df0830d7f90d6f2268898f018eb84a1981c18febb3a7d1140" address="unix:///run/containerd/s/7f168494caef3295bd4c2702dcac17b1496f454a0eee255b5cdd9eb89e476466" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:58:29.611796 systemd[1]: Started cri-containerd-2f5a56d6ff32434df0830d7f90d6f2268898f018eb84a1981c18febb3a7d1140.scope - libcontainer container 2f5a56d6ff32434df0830d7f90d6f2268898f018eb84a1981c18febb3a7d1140. 
Dec 13 22:58:29.620000 audit: BPF prog-id=137 op=LOAD Dec 13 22:58:29.622994 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 13 22:58:29.623088 kernel: audit: type=1334 audit(1765666709.620:435): prog-id=137 op=LOAD Dec 13 22:58:29.621000 audit: BPF prog-id=138 op=LOAD Dec 13 22:58:29.623885 kernel: audit: type=1334 audit(1765666709.621:436): prog-id=138 op=LOAD Dec 13 22:58:29.623915 kernel: audit: type=1300 audit(1765666709.621:436): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.621000 audit[2823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.629678 kernel: audit: type=1327 audit(1765666709.621:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.629726 kernel: audit: type=1334 audit(1765666709.621:437): prog-id=138 op=UNLOAD Dec 13 22:58:29.621000 audit: BPF prog-id=138 op=UNLOAD Dec 13 22:58:29.621000 audit[2823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.633402 kernel: audit: type=1300 audit(1765666709.621:437): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.636488 kernel: audit: type=1327 audit(1765666709.621:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.636542 kernel: audit: type=1334 audit(1765666709.621:438): prog-id=139 op=LOAD Dec 13 22:58:29.621000 audit: BPF prog-id=139 op=LOAD Dec 13 22:58:29.621000 audit[2823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.640341 kernel: audit: type=1300 audit(1765666709.621:438): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.640389 kernel: audit: type=1327 audit(1765666709.621:438): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.622000 audit: BPF prog-id=140 op=LOAD Dec 13 22:58:29.622000 audit[2823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.625000 audit: BPF prog-id=140 op=UNLOAD Dec 13 22:58:29.625000 audit[2823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.625000 audit: BPF prog-id=139 op=UNLOAD Dec 13 22:58:29.625000 audit[2823]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.625000 audit: BPF prog-id=141 op=LOAD Dec 13 22:58:29.625000 audit[2823]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2812 pid=2823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.625000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266356135366436666633323433346466303833306437663930643666 Dec 13 22:58:29.662857 containerd[1608]: time="2025-12-13T22:58:29.662814797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bxm7,Uid:60e02a2c-92f2-4975-a960-251c51d613f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f5a56d6ff32434df0830d7f90d6f2268898f018eb84a1981c18febb3a7d1140\"" Dec 13 22:58:29.664276 kubelet[2752]: E1213 22:58:29.664254 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:29.669484 containerd[1608]: time="2025-12-13T22:58:29.669446877Z" level=info msg="CreateContainer within sandbox \"2f5a56d6ff32434df0830d7f90d6f2268898f018eb84a1981c18febb3a7d1140\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 22:58:29.687099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3012768027.mount: Deactivated successfully. Dec 13 22:58:29.690839 containerd[1608]: time="2025-12-13T22:58:29.689761037Z" level=info msg="Container 6c27e6a68d0fab2d190b61cca4745ee6999218dd03ee0dc289021ac42dfdb077: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:29.697087 kubelet[2752]: I1213 22:58:29.697048 2752 status_manager.go:890] "Failed to get status for pod" podUID="a7ac630a-6575-4c58-8a31-d50e6349421c" pod="tigera-operator/tigera-operator-7dcd859c48-6r94j" err="pods \"tigera-operator-7dcd859c48-6r94j\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" Dec 13 22:58:29.701369 containerd[1608]: time="2025-12-13T22:58:29.700928557Z" level=info msg="CreateContainer within sandbox \"2f5a56d6ff32434df0830d7f90d6f2268898f018eb84a1981c18febb3a7d1140\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c27e6a68d0fab2d190b61cca4745ee6999218dd03ee0dc289021ac42dfdb077\"" Dec 13 22:58:29.702069 containerd[1608]: time="2025-12-13T22:58:29.702036037Z" level=info msg="StartContainer for \"6c27e6a68d0fab2d190b61cca4745ee6999218dd03ee0dc289021ac42dfdb077\"" Dec 13 22:58:29.704096 systemd[1]: Created slice kubepods-besteffort-poda7ac630a_6575_4c58_8a31_d50e6349421c.slice - libcontainer container kubepods-besteffort-poda7ac630a_6575_4c58_8a31_d50e6349421c.slice. 
Dec 13 22:58:29.704749 containerd[1608]: time="2025-12-13T22:58:29.704609157Z" level=info msg="connecting to shim 6c27e6a68d0fab2d190b61cca4745ee6999218dd03ee0dc289021ac42dfdb077" address="unix:///run/containerd/s/7f168494caef3295bd4c2702dcac17b1496f454a0eee255b5cdd9eb89e476466" protocol=ttrpc version=3 Dec 13 22:58:29.722961 kubelet[2752]: I1213 22:58:29.722925 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7ac630a-6575-4c58-8a31-d50e6349421c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6r94j\" (UID: \"a7ac630a-6575-4c58-8a31-d50e6349421c\") " pod="tigera-operator/tigera-operator-7dcd859c48-6r94j" Dec 13 22:58:29.723157 kubelet[2752]: I1213 22:58:29.723116 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnx6r\" (UniqueName: \"kubernetes.io/projected/a7ac630a-6575-4c58-8a31-d50e6349421c-kube-api-access-tnx6r\") pod \"tigera-operator-7dcd859c48-6r94j\" (UID: \"a7ac630a-6575-4c58-8a31-d50e6349421c\") " pod="tigera-operator/tigera-operator-7dcd859c48-6r94j" Dec 13 22:58:29.727779 systemd[1]: Started cri-containerd-6c27e6a68d0fab2d190b61cca4745ee6999218dd03ee0dc289021ac42dfdb077.scope - libcontainer container 6c27e6a68d0fab2d190b61cca4745ee6999218dd03ee0dc289021ac42dfdb077. Dec 13 22:58:29.785000 audit: BPF prog-id=142 op=LOAD Dec 13 22:58:29.785000 audit[2850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323765366136386430666162326431393062363163636134373435 Dec 13 22:58:29.785000 audit: BPF prog-id=143 op=LOAD Dec 13 22:58:29.785000 audit[2850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323765366136386430666162326431393062363163636134373435 Dec 13 22:58:29.785000 audit: BPF prog-id=143 op=UNLOAD Dec 13 22:58:29.785000 audit[2850]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323765366136386430666162326431393062363163636134373435 Dec 13 22:58:29.785000 audit: BPF prog-id=142 op=UNLOAD Dec 13 22:58:29.785000 audit[2850]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323765366136386430666162326431393062363163636134373435 Dec 13 22:58:29.785000 audit: BPF prog-id=144 op=LOAD Dec 13 22:58:29.785000 audit[2850]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2812 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663323765366136386430666162326431393062363163636134373435 Dec 13 22:58:29.803168 containerd[1608]: time="2025-12-13T22:58:29.803129837Z" level=info msg="StartContainer for \"6c27e6a68d0fab2d190b61cca4745ee6999218dd03ee0dc289021ac42dfdb077\" returns successfully" Dec 13 22:58:29.953000 audit[2914]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:29.953000 audit[2914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdaa9d6c0 a2=0 a3=1 items=0 ppid=2863 pid=2914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 22:58:29.953000 audit[2915]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=2915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:29.953000 audit[2915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff75204b0 a2=0 a3=1 items=0 ppid=2863 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 22:58:29.957000 audit[2916]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=2916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:29.957000 audit[2916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2da9700 a2=0 a3=1 items=0 ppid=2863 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.957000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 22:58:29.957000 audit[2917]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain 
pid=2917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:29.957000 audit[2917]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef3d8db0 a2=0 a3=1 items=0 ppid=2863 pid=2917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.957000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 22:58:29.958000 audit[2918]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:29.958000 audit[2918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8d8d7c0 a2=0 a3=1 items=0 ppid=2863 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.958000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 22:58:29.959000 audit[2919]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:29.959000 audit[2919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7c38640 a2=0 a3=1 items=0 ppid=2863 pid=2919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:29.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 22:58:30.008003 containerd[1608]: time="2025-12-13T22:58:30.007952277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6r94j,Uid:a7ac630a-6575-4c58-8a31-d50e6349421c,Namespace:tigera-operator,Attempt:0,}" Dec 13 22:58:30.023263 containerd[1608]: time="2025-12-13T22:58:30.023217117Z" level=info msg="connecting to shim a76af699cec4cfe2edec104a6a9c810ae80b605e77ea97f4e7b8665ec8c3bde6" address="unix:///run/containerd/s/ea166df2ca0eb8e023d42a9fea42c7a7666367ac3588091bae029faf3eb15a48" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:58:30.045777 systemd[1]: Started cri-containerd-a76af699cec4cfe2edec104a6a9c810ae80b605e77ea97f4e7b8665ec8c3bde6.scope - libcontainer container a76af699cec4cfe2edec104a6a9c810ae80b605e77ea97f4e7b8665ec8c3bde6. 
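The NETFILTER_CFG records in this stretch appear to be kube-proxy building its chains: family=2 is IPv4 (iptables), family=10 is IPv6 (ip6tables), and the decoded proctitles are calls such as iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle, followed further down by the KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD and KUBE-POSTROUTING chains and the iptables-restore batches. A rough way to tally such records when reading a capture like this one (a sketch only; the regex assumes the field layout shown in these lines):

    import re
    from collections import Counter

    # Count NETFILTER_CFG operations per table and address family
    # (family=2 -> IPv4/iptables, family=10 -> IPv6/ip6tables).
    pattern = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\S+)")

    def summarize(log_text: str) -> Counter:
        counts = Counter()
        for table, family, entries, op in pattern.findall(log_text):
            proto = {"2": "ipv4", "10": "ipv6"}.get(family, family)
            counts[(table, proto, op)] += int(entries)
        return counts
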
Dec 13 22:58:30.055000 audit: BPF prog-id=145 op=LOAD Dec 13 22:58:30.056000 audit: BPF prog-id=146 op=LOAD Dec 13 22:58:30.056000 audit[2940]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2929 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366166363939636563346366653265646563313034613661396338 Dec 13 22:58:30.056000 audit: BPF prog-id=146 op=UNLOAD Dec 13 22:58:30.056000 audit[2940]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366166363939636563346366653265646563313034613661396338 Dec 13 22:58:30.058822 kubelet[2752]: E1213 22:58:30.058735 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:30.059042 kubelet[2752]: E1213 22:58:30.059016 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:30.057000 audit: BPF prog-id=147 op=LOAD Dec 13 22:58:30.057000 audit[2940]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2929 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366166363939636563346366653265646563313034613661396338 Dec 13 22:58:30.057000 audit: BPF prog-id=148 op=LOAD Dec 13 22:58:30.057000 audit[2940]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2929 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366166363939636563346366653265646563313034613661396338 Dec 13 22:58:30.057000 audit: BPF prog-id=148 op=UNLOAD Dec 13 22:58:30.057000 audit[2940]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366166363939636563346366653265646563313034613661396338 Dec 13 22:58:30.057000 audit: BPF prog-id=147 op=UNLOAD Dec 13 22:58:30.057000 audit[2940]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366166363939636563346366653265646563313034613661396338 Dec 13 22:58:30.058000 audit[2959]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.058000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe2b0cce0 a2=0 a3=1 items=0 ppid=2863 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 22:58:30.057000 audit: BPF prog-id=149 op=LOAD Dec 13 22:58:30.057000 audit[2940]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2929 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137366166363939636563346366653265646563313034613661396338 Dec 13 22:58:30.063000 audit[2961]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.063000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff2499330 a2=0 a3=1 items=0 ppid=2863 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 22:58:30.067000 audit[2964]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.067000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff62b4040 a2=0 a3=1 items=0 ppid=2863 pid=2964 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 22:58:30.069000 audit[2965]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.069000 audit[2965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd3819020 a2=0 a3=1 items=0 ppid=2863 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 22:58:30.072000 audit[2967]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.072000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcb5489d0 a2=0 a3=1 items=0 ppid=2863 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 22:58:30.076000 audit[2968]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.076000 audit[2968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffdc25530 a2=0 a3=1 items=0 ppid=2863 pid=2968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 22:58:30.080000 audit[2970]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.080000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffefa69760 a2=0 a3=1 items=0 ppid=2863 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 22:58:30.090000 audit[2974]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2974 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.090000 audit[2974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd9b45aa0 a2=0 a3=1 items=0 ppid=2863 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 22:58:30.093008 kubelet[2752]: I1213 22:58:30.092777 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6bxm7" podStartSLOduration=1.092757197 podStartE2EDuration="1.092757197s" podCreationTimestamp="2025-12-13 22:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:58:30.080834917 +0000 UTC m=+7.194595401" watchObservedRunningTime="2025-12-13 22:58:30.092757197 +0000 UTC m=+7.206517681" Dec 13 22:58:30.092000 audit[2975]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.092000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6760030 a2=0 a3=1 items=0 ppid=2863 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 22:58:30.095000 audit[2982]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.095000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea12b890 a2=0 a3=1 items=0 ppid=2863 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 22:58:30.096000 audit[2983]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.096000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1fb8300 a2=0 a3=1 items=0 ppid=2863 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 22:58:30.100418 containerd[1608]: time="2025-12-13T22:58:30.100378277Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6r94j,Uid:a7ac630a-6575-4c58-8a31-d50e6349421c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a76af699cec4cfe2edec104a6a9c810ae80b605e77ea97f4e7b8665ec8c3bde6\"" Dec 13 22:58:30.100000 audit[2985]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.100000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe17ffb40 a2=0 a3=1 items=0 ppid=2863 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 22:58:30.102735 containerd[1608]: time="2025-12-13T22:58:30.102684997Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 13 22:58:30.105000 audit[2988]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.105000 audit[2988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd82000d0 a2=0 a3=1 items=0 ppid=2863 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 22:58:30.108000 audit[2991]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2991 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.108000 audit[2991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2599c20 a2=0 a3=1 items=0 ppid=2863 pid=2991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 22:58:30.109000 audit[2992]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.109000 audit[2992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffde4b7b20 a2=0 a3=1 items=0 ppid=2863 pid=2992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 22:58:30.112000 audit[2994]: NETFILTER_CFG table=nat:75 family=2 entries=1 
op=nft_register_rule pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.112000 audit[2994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffde21da00 a2=0 a3=1 items=0 ppid=2863 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 22:58:30.116000 audit[2997]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2997 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.116000 audit[2997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc2c9640 a2=0 a3=1 items=0 ppid=2863 pid=2997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 22:58:30.117000 audit[2998]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.117000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6afa610 a2=0 a3=1 items=0 ppid=2863 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.117000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 22:58:30.119000 audit[3000]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 22:58:30.119000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffc7e31880 a2=0 a3=1 items=0 ppid=2863 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 22:58:30.137000 audit[3006]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:30.137000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc7a0cea0 a2=0 a3=1 items=0 ppid=2863 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.137000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:30.145000 audit[3006]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:30.145000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc7a0cea0 a2=0 a3=1 items=0 ppid=2863 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:30.147000 audit[3011]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.147000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcb457880 a2=0 a3=1 items=0 ppid=2863 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 22:58:30.149000 audit[3013]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.149000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd01cd6e0 a2=0 a3=1 items=0 ppid=2863 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 22:58:30.153000 audit[3016]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.153000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffca128b0 a2=0 a3=1 items=0 ppid=2863 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 22:58:30.154000 audit[3017]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.154000 audit[3017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd54d48f0 a2=0 a3=1 items=0 ppid=2863 pid=3017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 22:58:30.157000 audit[3019]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.157000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1c8a560 a2=0 a3=1 items=0 ppid=2863 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 22:58:30.158000 audit[3020]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.158000 audit[3020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee8114c0 a2=0 a3=1 items=0 ppid=2863 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 22:58:30.160000 audit[3022]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.160000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd1fed940 a2=0 a3=1 items=0 ppid=2863 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 22:58:30.164000 audit[3025]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.164000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe7e86ca0 a2=0 a3=1 items=0 ppid=2863 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 22:58:30.165000 audit[3026]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.165000 audit[3026]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdda24fd0 a2=0 a3=1 items=0 ppid=2863 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 22:58:30.167000 audit[3028]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.167000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff99be380 a2=0 a3=1 items=0 ppid=2863 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.167000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 22:58:30.168000 audit[3029]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.168000 audit[3029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd269b560 a2=0 a3=1 items=0 ppid=2863 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.168000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 22:58:30.171000 audit[3031]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.171000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe13d5e10 a2=0 a3=1 items=0 ppid=2863 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 22:58:30.174000 audit[3034]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.174000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeed15630 a2=0 a3=1 items=0 ppid=2863 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.174000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 22:58:30.179000 audit[3037]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.179000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc3a3e2c0 a2=0 a3=1 items=0 ppid=2863 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 22:58:30.180000 audit[3038]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.180000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc5206160 a2=0 a3=1 items=0 ppid=2863 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 22:58:30.183000 audit[3040]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.183000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff992ec00 a2=0 a3=1 items=0 ppid=2863 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 22:58:30.186000 audit[3043]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.186000 audit[3043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0615230 a2=0 a3=1 items=0 ppid=2863 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 22:58:30.187000 audit[3044]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3044 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.187000 audit[3044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=fffff4220490 a2=0 a3=1 items=0 ppid=2863 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 22:58:30.190000 audit[3046]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.190000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc4f6d150 a2=0 a3=1 items=0 ppid=2863 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 22:58:30.191000 audit[3047]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.191000 audit[3047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1346100 a2=0 a3=1 items=0 ppid=2863 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 22:58:30.193000 audit[3049]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.193000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc88d8ba0 a2=0 a3=1 items=0 ppid=2863 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 22:58:30.196000 audit[3052]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 22:58:30.196000 audit[3052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff7296830 a2=0 a3=1 items=0 ppid=2863 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 22:58:30.200000 audit[3054]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 22:58:30.200000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe53d6ed0 a2=0 a3=1 items=0 ppid=2863 
pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.200000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:30.200000 audit[3054]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 22:58:30.200000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe53d6ed0 a2=0 a3=1 items=0 ppid=2863 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:30.200000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:31.061926 kubelet[2752]: E1213 22:58:31.061878 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:31.855668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313280586.mount: Deactivated successfully. Dec 13 22:58:32.157729 kubelet[2752]: E1213 22:58:32.157616 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:33.069813 kubelet[2752]: E1213 22:58:33.069762 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:33.725622 containerd[1608]: time="2025-12-13T22:58:33.725577472Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:33.726273 containerd[1608]: time="2025-12-13T22:58:33.726233904Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Dec 13 22:58:33.727155 containerd[1608]: time="2025-12-13T22:58:33.727100813Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:33.729210 containerd[1608]: time="2025-12-13T22:58:33.729178626Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:33.730340 containerd[1608]: time="2025-12-13T22:58:33.730243692Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.627429695s" Dec 13 22:58:33.730340 containerd[1608]: time="2025-12-13T22:58:33.730272492Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 13 22:58:33.736319 containerd[1608]: 
time="2025-12-13T22:58:33.736204696Z" level=info msg="CreateContainer within sandbox \"a76af699cec4cfe2edec104a6a9c810ae80b605e77ea97f4e7b8665ec8c3bde6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 22:58:33.743211 containerd[1608]: time="2025-12-13T22:58:33.742682492Z" level=info msg="Container 4b1e2df76963ca5be95fc7e538f5debcc50c2f54cbe5af14bbed91a9cc24ec9a: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:33.747038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458305067.mount: Deactivated successfully. Dec 13 22:58:33.751269 containerd[1608]: time="2025-12-13T22:58:33.751185743Z" level=info msg="CreateContainer within sandbox \"a76af699cec4cfe2edec104a6a9c810ae80b605e77ea97f4e7b8665ec8c3bde6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4b1e2df76963ca5be95fc7e538f5debcc50c2f54cbe5af14bbed91a9cc24ec9a\"" Dec 13 22:58:33.752581 containerd[1608]: time="2025-12-13T22:58:33.752267569Z" level=info msg="StartContainer for \"4b1e2df76963ca5be95fc7e538f5debcc50c2f54cbe5af14bbed91a9cc24ec9a\"" Dec 13 22:58:33.753233 containerd[1608]: time="2025-12-13T22:58:33.753193397Z" level=info msg="connecting to shim 4b1e2df76963ca5be95fc7e538f5debcc50c2f54cbe5af14bbed91a9cc24ec9a" address="unix:///run/containerd/s/ea166df2ca0eb8e023d42a9fea42c7a7666367ac3588091bae029faf3eb15a48" protocol=ttrpc version=3 Dec 13 22:58:33.782784 systemd[1]: Started cri-containerd-4b1e2df76963ca5be95fc7e538f5debcc50c2f54cbe5af14bbed91a9cc24ec9a.scope - libcontainer container 4b1e2df76963ca5be95fc7e538f5debcc50c2f54cbe5af14bbed91a9cc24ec9a. Dec 13 22:58:33.792000 audit: BPF prog-id=150 op=LOAD Dec 13 22:58:33.793000 audit: BPF prog-id=151 op=LOAD Dec 13 22:58:33.793000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2929 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462316532646637363936336361356265393566633765353338663564 Dec 13 22:58:33.793000 audit: BPF prog-id=151 op=UNLOAD Dec 13 22:58:33.793000 audit[3065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462316532646637363936336361356265393566633765353338663564 Dec 13 22:58:33.793000 audit: BPF prog-id=152 op=LOAD Dec 13 22:58:33.793000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2929 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:33.793000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462316532646637363936336361356265393566633765353338663564 Dec 13 22:58:33.793000 audit: BPF prog-id=153 op=LOAD Dec 13 22:58:33.793000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2929 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462316532646637363936336361356265393566633765353338663564 Dec 13 22:58:33.793000 audit: BPF prog-id=153 op=UNLOAD Dec 13 22:58:33.793000 audit[3065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462316532646637363936336361356265393566633765353338663564 Dec 13 22:58:33.793000 audit: BPF prog-id=152 op=UNLOAD Dec 13 22:58:33.793000 audit[3065]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2929 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462316532646637363936336361356265393566633765353338663564 Dec 13 22:58:33.793000 audit: BPF prog-id=154 op=LOAD Dec 13 22:58:33.793000 audit[3065]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2929 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:33.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462316532646637363936336361356265393566633765353338663564 Dec 13 22:58:33.810541 containerd[1608]: time="2025-12-13T22:58:33.810439780Z" level=info msg="StartContainer for \"4b1e2df76963ca5be95fc7e538f5debcc50c2f54cbe5af14bbed91a9cc24ec9a\" returns successfully" Dec 13 22:58:34.082110 kubelet[2752]: I1213 22:58:34.082049 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6r94j" podStartSLOduration=1.450436571 podStartE2EDuration="5.082032867s" podCreationTimestamp="2025-12-13 22:58:29 +0000 UTC" firstStartedPulling="2025-12-13 22:58:30.101685677 +0000 
UTC m=+7.215446121" lastFinishedPulling="2025-12-13 22:58:33.733282013 +0000 UTC m=+10.847042417" observedRunningTime="2025-12-13 22:58:34.081686632 +0000 UTC m=+11.195447076" watchObservedRunningTime="2025-12-13 22:58:34.082032867 +0000 UTC m=+11.195793311" Dec 13 22:58:36.755621 kubelet[2752]: E1213 22:58:36.755494 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:39.084363 sudo[1823]: pam_unix(sudo:session): session closed for user root Dec 13 22:58:39.082000 audit[1823]: USER_END pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:39.085047 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 13 22:58:39.085286 kernel: audit: type=1106 audit(1765666719.082:515): pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:39.082000 audit[1823]: CRED_DISP pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:39.090323 kernel: audit: type=1104 audit(1765666719.082:516): pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 22:58:39.093508 sshd[1822]: Connection closed by 10.0.0.1 port 44764 Dec 13 22:58:39.093802 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Dec 13 22:58:39.094000 audit[1818]: USER_END pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:39.094000 audit[1818]: CRED_DISP pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:39.103132 kernel: audit: type=1106 audit(1765666719.094:517): pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:39.103204 kernel: audit: type=1104 audit(1765666719.094:518): pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:58:39.103225 kernel: audit: type=1131 audit(1765666719.099:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.10:22-10.0.0.1:44764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:58:39.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.10:22-10.0.0.1:44764 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:58:39.101641 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:44764.service: Deactivated successfully. Dec 13 22:58:39.104237 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 22:58:39.104672 systemd[1]: session-8.scope: Consumed 6.010s CPU time, 192.4M memory peak. Dec 13 22:58:39.106065 systemd-logind[1585]: Session 8 logged out. Waiting for processes to exit. Dec 13 22:58:39.109978 systemd-logind[1585]: Removed session 8. Dec 13 22:58:41.837000 audit[3156]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:41.841589 kernel: audit: type=1325 audit(1765666721.837:520): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:41.841676 kernel: audit: type=1300 audit(1765666721.837:520): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe762d720 a2=0 a3=1 items=0 ppid=2863 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:41.837000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe762d720 a2=0 a3=1 items=0 ppid=2863 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:41.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:41.847295 kernel: audit: type=1327 audit(1765666721.837:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:41.849060 kernel: audit: type=1325 audit(1765666721.847:521): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:41.847000 audit[3156]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:41.847000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe762d720 a2=0 a3=1 items=0 ppid=2863 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:41.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:41.858691 kernel: audit: type=1300 audit(1765666721.847:521): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe762d720 a2=0 a3=1 items=0 ppid=2863 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:41.871000 audit[3158]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3158 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:41.871000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff56940b0 a2=0 a3=1 items=0 ppid=2863 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:41.871000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:41.878000 audit[3158]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:41.878000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff56940b0 a2=0 a3=1 items=0 ppid=2863 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:41.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:42.988757 update_engine[1588]: I20251213 22:58:42.988584 1588 update_attempter.cc:509] Updating boot flags... Dec 13 22:58:45.646000 audit[3179]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:45.649089 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 22:58:45.649145 kernel: audit: type=1325 audit(1765666725.646:524): table=filter:109 family=2 entries=16 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:45.646000 audit[3179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe1e78a10 a2=0 a3=1 items=0 ppid=2863 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:45.656042 kernel: audit: type=1300 audit(1765666725.646:524): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe1e78a10 a2=0 a3=1 items=0 ppid=2863 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:45.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:45.658398 kernel: audit: type=1327 audit(1765666725.646:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:45.660000 audit[3179]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:45.664576 kernel: audit: type=1325 audit(1765666725.660:525): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:45.660000 audit[3179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1e78a10 a2=0 a3=1 items=0 ppid=2863 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:45.660000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:45.669739 kernel: audit: type=1300 audit(1765666725.660:525): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe1e78a10 a2=0 a3=1 items=0 ppid=2863 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:45.669800 kernel: audit: type=1327 audit(1765666725.660:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:45.678000 audit[3181]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:45.678000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff54f8360 a2=0 a3=1 items=0 ppid=2863 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:45.684861 kernel: audit: type=1325 audit(1765666725.678:526): table=filter:111 family=2 entries=17 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:45.684937 kernel: audit: type=1300 audit(1765666725.678:526): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff54f8360 a2=0 a3=1 items=0 ppid=2863 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:45.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:45.687583 kernel: audit: type=1327 audit(1765666725.678:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:45.686000 audit[3181]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:45.686000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff54f8360 a2=0 a3=1 items=0 ppid=2863 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:45.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:45.690587 kernel: audit: type=1325 audit(1765666725.686:527): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:46.699000 audit[3183]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:46.699000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe7abed30 a2=0 a3=1 items=0 ppid=2863 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:46.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:46.706000 audit[3183]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:46.706000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7abed30 a2=0 a3=1 items=0 ppid=2863 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:46.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:47.918000 audit[3185]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:47.918000 audit[3185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffde4fd090 a2=0 a3=1 items=0 ppid=2863 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:47.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:47.927000 audit[3185]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:47.927000 audit[3185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffde4fd090 a2=0 a3=1 items=0 ppid=2863 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:47.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:47.966639 systemd[1]: Created slice kubepods-besteffort-pod390dc826_219d_4707_a74f_5c61ef63d73d.slice - libcontainer container kubepods-besteffort-pod390dc826_219d_4707_a74f_5c61ef63d73d.slice. 
Dec 13 22:58:48.043608 kubelet[2752]: I1213 22:58:48.043541 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/390dc826-219d-4707-a74f-5c61ef63d73d-tigera-ca-bundle\") pod \"calico-typha-c787844d-mz2cb\" (UID: \"390dc826-219d-4707-a74f-5c61ef63d73d\") " pod="calico-system/calico-typha-c787844d-mz2cb" Dec 13 22:58:48.043608 kubelet[2752]: I1213 22:58:48.043616 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/390dc826-219d-4707-a74f-5c61ef63d73d-typha-certs\") pod \"calico-typha-c787844d-mz2cb\" (UID: \"390dc826-219d-4707-a74f-5c61ef63d73d\") " pod="calico-system/calico-typha-c787844d-mz2cb" Dec 13 22:58:48.043997 kubelet[2752]: I1213 22:58:48.043654 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rvj8\" (UniqueName: \"kubernetes.io/projected/390dc826-219d-4707-a74f-5c61ef63d73d-kube-api-access-9rvj8\") pod \"calico-typha-c787844d-mz2cb\" (UID: \"390dc826-219d-4707-a74f-5c61ef63d73d\") " pod="calico-system/calico-typha-c787844d-mz2cb" Dec 13 22:58:48.164194 systemd[1]: Created slice kubepods-besteffort-pod9f9aa7ab_242d_405f_baa8_53132bd8b67d.slice - libcontainer container kubepods-besteffort-pod9f9aa7ab_242d_405f_baa8_53132bd8b67d.slice. Dec 13 22:58:48.245131 kubelet[2752]: I1213 22:58:48.244927 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-cni-net-dir\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245131 kubelet[2752]: I1213 22:58:48.245060 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-flexvol-driver-host\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245131 kubelet[2752]: I1213 22:58:48.245089 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-cni-log-dir\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245447 kubelet[2752]: I1213 22:58:48.245303 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9f9aa7ab-242d-405f-baa8-53132bd8b67d-node-certs\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245447 kubelet[2752]: I1213 22:58:48.245346 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-policysync\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245447 kubelet[2752]: I1213 22:58:48.245377 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql94g\" (UniqueName: 
\"kubernetes.io/projected/9f9aa7ab-242d-405f-baa8-53132bd8b67d-kube-api-access-ql94g\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245447 kubelet[2752]: I1213 22:58:48.245399 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-var-run-calico\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245447 kubelet[2752]: I1213 22:58:48.245414 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-xtables-lock\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245585 kubelet[2752]: I1213 22:58:48.245433 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-cni-bin-dir\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245746 kubelet[2752]: I1213 22:58:48.245646 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-lib-modules\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245746 kubelet[2752]: I1213 22:58:48.245674 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f9aa7ab-242d-405f-baa8-53132bd8b67d-tigera-ca-bundle\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.245746 kubelet[2752]: I1213 22:58:48.245702 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f9aa7ab-242d-405f-baa8-53132bd8b67d-var-lib-calico\") pod \"calico-node-r98vd\" (UID: \"9f9aa7ab-242d-405f-baa8-53132bd8b67d\") " pod="calico-system/calico-node-r98vd" Dec 13 22:58:48.270128 kubelet[2752]: E1213 22:58:48.270090 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:48.271127 containerd[1608]: time="2025-12-13T22:58:48.270741926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c787844d-mz2cb,Uid:390dc826-219d-4707-a74f-5c61ef63d73d,Namespace:calico-system,Attempt:0,}" Dec 13 22:58:48.301304 containerd[1608]: time="2025-12-13T22:58:48.301244657Z" level=info msg="connecting to shim d8f56b69c0b959530bf9c6bcaf9448c3f1c2d35ebe106c4e69c07e5070095321" address="unix:///run/containerd/s/961f59da52e172e5b597f6e4a7ca4fb5e3be2b791e009a1d55bdb37510efc9e4" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:58:48.328211 kubelet[2752]: E1213 22:58:48.328158 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:58:48.346050 kubelet[2752]: I1213 22:58:48.345996 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6e20487-ee64-4317-b075-5244d40e7b5a-kubelet-dir\") pod \"csi-node-driver-8dxl2\" (UID: \"e6e20487-ee64-4317-b075-5244d40e7b5a\") " pod="calico-system/csi-node-driver-8dxl2" Dec 13 22:58:48.346193 kubelet[2752]: I1213 22:58:48.346078 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6e20487-ee64-4317-b075-5244d40e7b5a-registration-dir\") pod \"csi-node-driver-8dxl2\" (UID: \"e6e20487-ee64-4317-b075-5244d40e7b5a\") " pod="calico-system/csi-node-driver-8dxl2" Dec 13 22:58:48.346193 kubelet[2752]: I1213 22:58:48.346097 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6e20487-ee64-4317-b075-5244d40e7b5a-socket-dir\") pod \"csi-node-driver-8dxl2\" (UID: \"e6e20487-ee64-4317-b075-5244d40e7b5a\") " pod="calico-system/csi-node-driver-8dxl2" Dec 13 22:58:48.346193 kubelet[2752]: I1213 22:58:48.346136 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e6e20487-ee64-4317-b075-5244d40e7b5a-varrun\") pod \"csi-node-driver-8dxl2\" (UID: \"e6e20487-ee64-4317-b075-5244d40e7b5a\") " pod="calico-system/csi-node-driver-8dxl2" Dec 13 22:58:48.346193 kubelet[2752]: I1213 22:58:48.346171 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbwv\" (UniqueName: \"kubernetes.io/projected/e6e20487-ee64-4317-b075-5244d40e7b5a-kube-api-access-vxbwv\") pod \"csi-node-driver-8dxl2\" (UID: \"e6e20487-ee64-4317-b075-5244d40e7b5a\") " pod="calico-system/csi-node-driver-8dxl2" Dec 13 22:58:48.348179 kubelet[2752]: E1213 22:58:48.348130 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.348179 kubelet[2752]: W1213 22:58:48.348158 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.350210 systemd[1]: Started cri-containerd-d8f56b69c0b959530bf9c6bcaf9448c3f1c2d35ebe106c4e69c07e5070095321.scope - libcontainer container d8f56b69c0b959530bf9c6bcaf9448c3f1c2d35ebe106c4e69c07e5070095321. 
Dec 13 22:58:48.354069 kubelet[2752]: E1213 22:58:48.351578 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.354069 kubelet[2752]: W1213 22:58:48.351607 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.354905 kubelet[2752]: E1213 22:58:48.354857 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.354905 kubelet[2752]: W1213 22:58:48.354876 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.357175 kubelet[2752]: E1213 22:58:48.356749 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.359735 kubelet[2752]: E1213 22:58:48.359692 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.360866 kubelet[2752]: E1213 22:58:48.360718 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.360866 kubelet[2752]: W1213 22:58:48.360740 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.360866 kubelet[2752]: E1213 22:58:48.360762 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.361790 kubelet[2752]: E1213 22:58:48.361760 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.361790 kubelet[2752]: W1213 22:58:48.361776 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.361895 kubelet[2752]: E1213 22:58:48.361796 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.363455 kubelet[2752]: E1213 22:58:48.363087 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.373318 kubelet[2752]: E1213 22:58:48.373276 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.373318 kubelet[2752]: W1213 22:58:48.373310 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.373441 kubelet[2752]: E1213 22:58:48.373398 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.373714 kubelet[2752]: E1213 22:58:48.373693 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.373714 kubelet[2752]: W1213 22:58:48.373708 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.373860 kubelet[2752]: E1213 22:58:48.373726 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.374063 kubelet[2752]: E1213 22:58:48.373905 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.374063 kubelet[2752]: W1213 22:58:48.373915 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.374063 kubelet[2752]: E1213 22:58:48.373969 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.373000 audit: BPF prog-id=155 op=LOAD Dec 13 22:58:48.375753 kubelet[2752]: E1213 22:58:48.374158 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.375753 kubelet[2752]: W1213 22:58:48.374168 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.375753 kubelet[2752]: E1213 22:58:48.374178 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.376000 audit: BPF prog-id=156 op=LOAD Dec 13 22:58:48.376000 audit[3209]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3198 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438663536623639633062393539353330626639633662636166393434 Dec 13 22:58:48.376000 audit: BPF prog-id=156 op=UNLOAD Dec 13 22:58:48.376000 audit[3209]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3198 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438663536623639633062393539353330626639633662636166393434 Dec 13 22:58:48.376000 audit: BPF prog-id=157 op=LOAD Dec 13 22:58:48.376000 audit[3209]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3198 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438663536623639633062393539353330626639633662636166393434 Dec 13 22:58:48.376000 audit: BPF prog-id=158 op=LOAD Dec 13 22:58:48.376000 audit[3209]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3198 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438663536623639633062393539353330626639633662636166393434 Dec 13 22:58:48.376000 audit: BPF prog-id=158 op=UNLOAD Dec 13 22:58:48.376000 audit[3209]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3198 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438663536623639633062393539353330626639633662636166393434 Dec 13 22:58:48.376000 audit: BPF prog-id=157 op=UNLOAD Dec 13 22:58:48.376000 audit[3209]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3198 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438663536623639633062393539353330626639633662636166393434 Dec 13 22:58:48.376000 audit: BPF prog-id=159 op=LOAD Dec 13 22:58:48.376000 audit[3209]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3198 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438663536623639633062393539353330626639633662636166393434 Dec 13 22:58:48.410090 containerd[1608]: time="2025-12-13T22:58:48.410039845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c787844d-mz2cb,Uid:390dc826-219d-4707-a74f-5c61ef63d73d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8f56b69c0b959530bf9c6bcaf9448c3f1c2d35ebe106c4e69c07e5070095321\"" Dec 13 22:58:48.419251 kubelet[2752]: E1213 22:58:48.418940 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:48.430966 containerd[1608]: time="2025-12-13T22:58:48.430910102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 13 22:58:48.446650 kubelet[2752]: E1213 22:58:48.446619 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.446650 kubelet[2752]: W1213 22:58:48.446642 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.446805 kubelet[2752]: E1213 22:58:48.446662 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.446847 kubelet[2752]: E1213 22:58:48.446834 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.446847 kubelet[2752]: W1213 22:58:48.446846 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.446896 kubelet[2752]: E1213 22:58:48.446864 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.447077 kubelet[2752]: E1213 22:58:48.447063 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.447077 kubelet[2752]: W1213 22:58:48.447074 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.447137 kubelet[2752]: E1213 22:58:48.447087 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.447345 kubelet[2752]: E1213 22:58:48.447330 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.447380 kubelet[2752]: W1213 22:58:48.447345 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.447380 kubelet[2752]: E1213 22:58:48.447360 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.447538 kubelet[2752]: E1213 22:58:48.447526 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.447538 kubelet[2752]: W1213 22:58:48.447538 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.447610 kubelet[2752]: E1213 22:58:48.447560 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.447818 kubelet[2752]: E1213 22:58:48.447748 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.447818 kubelet[2752]: W1213 22:58:48.447795 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.447818 kubelet[2752]: E1213 22:58:48.447812 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.447969 kubelet[2752]: E1213 22:58:48.447956 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.447969 kubelet[2752]: W1213 22:58:48.447966 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.448065 kubelet[2752]: E1213 22:58:48.448051 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.448108 kubelet[2752]: E1213 22:58:48.448099 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.448108 kubelet[2752]: W1213 22:58:48.448108 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.448160 kubelet[2752]: E1213 22:58:48.448132 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.448249 kubelet[2752]: E1213 22:58:48.448240 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.448297 kubelet[2752]: W1213 22:58:48.448249 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.448297 kubelet[2752]: E1213 22:58:48.448269 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.448403 kubelet[2752]: E1213 22:58:48.448391 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.448403 kubelet[2752]: W1213 22:58:48.448401 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.448403 kubelet[2752]: E1213 22:58:48.448421 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.448545 kubelet[2752]: E1213 22:58:48.448533 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.448545 kubelet[2752]: W1213 22:58:48.448543 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.448619 kubelet[2752]: E1213 22:58:48.448565 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.448732 kubelet[2752]: E1213 22:58:48.448716 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.448732 kubelet[2752]: W1213 22:58:48.448727 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.448781 kubelet[2752]: E1213 22:58:48.448742 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.448955 kubelet[2752]: E1213 22:58:48.448938 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.448955 kubelet[2752]: W1213 22:58:48.448951 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.449013 kubelet[2752]: E1213 22:58:48.448966 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.449197 kubelet[2752]: E1213 22:58:48.449168 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.449197 kubelet[2752]: W1213 22:58:48.449180 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.449278 kubelet[2752]: E1213 22:58:48.449259 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.449353 kubelet[2752]: E1213 22:58:48.449318 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.449353 kubelet[2752]: W1213 22:58:48.449334 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.449470 kubelet[2752]: E1213 22:58:48.449391 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.449544 kubelet[2752]: E1213 22:58:48.449517 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.449544 kubelet[2752]: W1213 22:58:48.449525 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.449626 kubelet[2752]: E1213 22:58:48.449547 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.449703 kubelet[2752]: E1213 22:58:48.449688 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.449703 kubelet[2752]: W1213 22:58:48.449699 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.449807 kubelet[2752]: E1213 22:58:48.449733 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.449846 kubelet[2752]: E1213 22:58:48.449833 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.449846 kubelet[2752]: W1213 22:58:48.449844 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.449901 kubelet[2752]: E1213 22:58:48.449884 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.450058 kubelet[2752]: E1213 22:58:48.450042 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.450058 kubelet[2752]: W1213 22:58:48.450055 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.450123 kubelet[2752]: E1213 22:58:48.450068 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.450288 kubelet[2752]: E1213 22:58:48.450276 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.450288 kubelet[2752]: W1213 22:58:48.450288 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.450345 kubelet[2752]: E1213 22:58:48.450304 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.450499 kubelet[2752]: E1213 22:58:48.450487 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.450617 kubelet[2752]: W1213 22:58:48.450499 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.450617 kubelet[2752]: E1213 22:58:48.450518 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.450885 kubelet[2752]: E1213 22:58:48.450871 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.450885 kubelet[2752]: W1213 22:58:48.450884 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.450946 kubelet[2752]: E1213 22:58:48.450901 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.451117 kubelet[2752]: E1213 22:58:48.451106 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.451117 kubelet[2752]: W1213 22:58:48.451118 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.451230 kubelet[2752]: E1213 22:58:48.451211 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.451291 kubelet[2752]: E1213 22:58:48.451271 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.451291 kubelet[2752]: W1213 22:58:48.451282 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.451291 kubelet[2752]: E1213 22:58:48.451291 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.451790 kubelet[2752]: E1213 22:58:48.451773 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.451790 kubelet[2752]: W1213 22:58:48.451789 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.451897 kubelet[2752]: E1213 22:58:48.451800 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:48.462294 kubelet[2752]: E1213 22:58:48.462271 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:48.462294 kubelet[2752]: W1213 22:58:48.462289 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:48.462427 kubelet[2752]: E1213 22:58:48.462304 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:48.472573 kubelet[2752]: E1213 22:58:48.472457 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:48.472998 containerd[1608]: time="2025-12-13T22:58:48.472964337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r98vd,Uid:9f9aa7ab-242d-405f-baa8-53132bd8b67d,Namespace:calico-system,Attempt:0,}" Dec 13 22:58:48.491662 containerd[1608]: time="2025-12-13T22:58:48.491529286Z" level=info msg="connecting to shim d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328" address="unix:///run/containerd/s/776a40e8aab695a63079e868f98c76ba56b788fe81bf009a8ca6b8886fd11e91" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:58:48.519783 systemd[1]: Started cri-containerd-d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328.scope - libcontainer container d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328. Dec 13 22:58:48.528000 audit: BPF prog-id=160 op=LOAD Dec 13 22:58:48.529000 audit: BPF prog-id=161 op=LOAD Dec 13 22:58:48.529000 audit[3300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3287 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323663313266343933393331646335626662666666613434623035 Dec 13 22:58:48.529000 audit: BPF prog-id=161 op=UNLOAD Dec 13 22:58:48.529000 audit[3300]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323663313266343933393331646335626662666666613434623035 Dec 13 22:58:48.529000 audit: BPF prog-id=162 op=LOAD Dec 13 22:58:48.529000 audit[3300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3287 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323663313266343933393331646335626662666666613434623035 Dec 13 22:58:48.529000 audit: BPF prog-id=163 op=LOAD Dec 13 22:58:48.529000 audit[3300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3287 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.529000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323663313266343933393331646335626662666666613434623035 Dec 13 22:58:48.529000 audit: BPF prog-id=163 op=UNLOAD Dec 13 22:58:48.529000 audit[3300]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323663313266343933393331646335626662666666613434623035 Dec 13 22:58:48.529000 audit: BPF prog-id=162 op=UNLOAD Dec 13 22:58:48.529000 audit[3300]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323663313266343933393331646335626662666666613434623035 Dec 13 22:58:48.529000 audit: BPF prog-id=164 op=LOAD Dec 13 22:58:48.529000 audit[3300]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3287 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439323663313266343933393331646335626662666666613434623035 Dec 13 22:58:48.549142 containerd[1608]: time="2025-12-13T22:58:48.549101645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r98vd,Uid:9f9aa7ab-242d-405f-baa8-53132bd8b67d,Namespace:calico-system,Attempt:0,} returns sandbox id \"d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328\"" Dec 13 22:58:48.549778 kubelet[2752]: E1213 22:58:48.549757 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:48.946000 audit[3327]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:48.946000 audit[3327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe9b1cdf0 a2=0 a3=1 items=0 ppid=2863 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.946000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 
13 22:58:48.953000 audit[3327]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:48.953000 audit[3327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe9b1cdf0 a2=0 a3=1 items=0 ppid=2863 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:48.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:49.462753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549168631.mount: Deactivated successfully. Dec 13 22:58:49.993001 containerd[1608]: time="2025-12-13T22:58:49.992953687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:49.993632 containerd[1608]: time="2025-12-13T22:58:49.993583405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 13 22:58:49.994545 containerd[1608]: time="2025-12-13T22:58:49.994490000Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:49.996361 containerd[1608]: time="2025-12-13T22:58:49.996296392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:49.996921 containerd[1608]: time="2025-12-13T22:58:49.996890709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.565917167s" Dec 13 22:58:49.996971 containerd[1608]: time="2025-12-13T22:58:49.996927309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 13 22:58:49.999017 containerd[1608]: time="2025-12-13T22:58:49.998836900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 13 22:58:50.015426 containerd[1608]: time="2025-12-13T22:58:50.015388669Z" level=info msg="CreateContainer within sandbox \"d8f56b69c0b959530bf9c6bcaf9448c3f1c2d35ebe106c4e69c07e5070095321\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 22:58:50.022747 containerd[1608]: time="2025-12-13T22:58:50.022712957Z" level=info msg="Container e187b44f45cc7dfafb42ca2ce06c0bed739005de32ca430834d112e4f4b86c66: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:50.022837 kubelet[2752]: E1213 22:58:50.022735 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:58:50.031889 containerd[1608]: time="2025-12-13T22:58:50.031823078Z" level=info 
msg="CreateContainer within sandbox \"d8f56b69c0b959530bf9c6bcaf9448c3f1c2d35ebe106c4e69c07e5070095321\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e187b44f45cc7dfafb42ca2ce06c0bed739005de32ca430834d112e4f4b86c66\"" Dec 13 22:58:50.032528 containerd[1608]: time="2025-12-13T22:58:50.032492235Z" level=info msg="StartContainer for \"e187b44f45cc7dfafb42ca2ce06c0bed739005de32ca430834d112e4f4b86c66\"" Dec 13 22:58:50.034015 containerd[1608]: time="2025-12-13T22:58:50.033975629Z" level=info msg="connecting to shim e187b44f45cc7dfafb42ca2ce06c0bed739005de32ca430834d112e4f4b86c66" address="unix:///run/containerd/s/961f59da52e172e5b597f6e4a7ca4fb5e3be2b791e009a1d55bdb37510efc9e4" protocol=ttrpc version=3 Dec 13 22:58:50.064796 systemd[1]: Started cri-containerd-e187b44f45cc7dfafb42ca2ce06c0bed739005de32ca430834d112e4f4b86c66.scope - libcontainer container e187b44f45cc7dfafb42ca2ce06c0bed739005de32ca430834d112e4f4b86c66. Dec 13 22:58:50.075000 audit: BPF prog-id=165 op=LOAD Dec 13 22:58:50.076000 audit: BPF prog-id=166 op=LOAD Dec 13 22:58:50.076000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3198 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:50.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383762343466343563633764666166623432636132636530366330 Dec 13 22:58:50.076000 audit: BPF prog-id=166 op=UNLOAD Dec 13 22:58:50.076000 audit[3338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3198 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:50.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383762343466343563633764666166623432636132636530366330 Dec 13 22:58:50.076000 audit: BPF prog-id=167 op=LOAD Dec 13 22:58:50.076000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3198 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:50.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383762343466343563633764666166623432636132636530366330 Dec 13 22:58:50.076000 audit: BPF prog-id=168 op=LOAD Dec 13 22:58:50.076000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3198 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:50.076000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383762343466343563633764666166623432636132636530366330 Dec 13 22:58:50.076000 audit: BPF prog-id=168 op=UNLOAD Dec 13 22:58:50.076000 audit[3338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3198 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:50.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383762343466343563633764666166623432636132636530366330 Dec 13 22:58:50.076000 audit: BPF prog-id=167 op=UNLOAD Dec 13 22:58:50.076000 audit[3338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3198 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:50.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383762343466343563633764666166623432636132636530366330 Dec 13 22:58:50.076000 audit: BPF prog-id=169 op=LOAD Dec 13 22:58:50.076000 audit[3338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3198 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:50.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531383762343466343563633764666166623432636132636530366330 Dec 13 22:58:50.104819 containerd[1608]: time="2025-12-13T22:58:50.104750725Z" level=info msg="StartContainer for \"e187b44f45cc7dfafb42ca2ce06c0bed739005de32ca430834d112e4f4b86c66\" returns successfully" Dec 13 22:58:50.114340 kubelet[2752]: E1213 22:58:50.114312 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:50.130446 kubelet[2752]: I1213 22:58:50.130271 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c787844d-mz2cb" podStartSLOduration=1.561036864 podStartE2EDuration="3.130254335s" podCreationTimestamp="2025-12-13 22:58:47 +0000 UTC" firstStartedPulling="2025-12-13 22:58:48.42947175 +0000 UTC m=+25.543232154" lastFinishedPulling="2025-12-13 22:58:49.998689181 +0000 UTC m=+27.112449625" observedRunningTime="2025-12-13 22:58:50.129776977 +0000 UTC m=+27.243537461" watchObservedRunningTime="2025-12-13 22:58:50.130254335 +0000 UTC m=+27.244014779" Dec 13 22:58:50.156632 kubelet[2752]: E1213 22:58:50.156601 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 
22:58:50.156632 kubelet[2752]: W1213 22:58:50.156627 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.156784 kubelet[2752]: E1213 22:58:50.156654 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.156880 kubelet[2752]: E1213 22:58:50.156827 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.156880 kubelet[2752]: W1213 22:58:50.156838 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.156880 kubelet[2752]: E1213 22:58:50.156872 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.157024 kubelet[2752]: E1213 22:58:50.157010 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.157024 kubelet[2752]: W1213 22:58:50.157021 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.157078 kubelet[2752]: E1213 22:58:50.157030 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.157168 kubelet[2752]: E1213 22:58:50.157157 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.157168 kubelet[2752]: W1213 22:58:50.157167 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.157238 kubelet[2752]: E1213 22:58:50.157184 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.157388 kubelet[2752]: E1213 22:58:50.157374 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.157388 kubelet[2752]: W1213 22:58:50.157386 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.157448 kubelet[2752]: E1213 22:58:50.157394 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:50.157572 kubelet[2752]: E1213 22:58:50.157532 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.157602 kubelet[2752]: W1213 22:58:50.157548 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.157602 kubelet[2752]: E1213 22:58:50.157583 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.157722 kubelet[2752]: E1213 22:58:50.157711 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.157754 kubelet[2752]: W1213 22:58:50.157722 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.157754 kubelet[2752]: E1213 22:58:50.157731 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.157880 kubelet[2752]: E1213 22:58:50.157868 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.157903 kubelet[2752]: W1213 22:58:50.157889 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.157903 kubelet[2752]: E1213 22:58:50.157898 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.158096 kubelet[2752]: E1213 22:58:50.158084 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.158096 kubelet[2752]: W1213 22:58:50.158095 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.158153 kubelet[2752]: E1213 22:58:50.158103 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.158236 kubelet[2752]: E1213 22:58:50.158224 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.158236 kubelet[2752]: W1213 22:58:50.158234 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.158278 kubelet[2752]: E1213 22:58:50.158241 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:50.158382 kubelet[2752]: E1213 22:58:50.158370 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.158382 kubelet[2752]: W1213 22:58:50.158380 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.158435 kubelet[2752]: E1213 22:58:50.158389 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.158630 kubelet[2752]: E1213 22:58:50.158618 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.158664 kubelet[2752]: W1213 22:58:50.158631 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.158664 kubelet[2752]: E1213 22:58:50.158641 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.159970 kubelet[2752]: E1213 22:58:50.159941 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.159970 kubelet[2752]: W1213 22:58:50.159960 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.160051 kubelet[2752]: E1213 22:58:50.159975 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.160325 kubelet[2752]: E1213 22:58:50.160294 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.160325 kubelet[2752]: W1213 22:58:50.160316 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.160375 kubelet[2752]: E1213 22:58:50.160328 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.160489 kubelet[2752]: E1213 22:58:50.160477 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.160587 kubelet[2752]: W1213 22:58:50.160487 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.160587 kubelet[2752]: E1213 22:58:50.160496 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:50.260249 kubelet[2752]: E1213 22:58:50.260000 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.260249 kubelet[2752]: W1213 22:58:50.260028 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.260249 kubelet[2752]: E1213 22:58:50.260048 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.260545 kubelet[2752]: E1213 22:58:50.260458 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.260545 kubelet[2752]: W1213 22:58:50.260472 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.260545 kubelet[2752]: E1213 22:58:50.260487 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.260833 kubelet[2752]: E1213 22:58:50.260795 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.260833 kubelet[2752]: W1213 22:58:50.260813 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.260833 kubelet[2752]: E1213 22:58:50.260828 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.261669 kubelet[2752]: E1213 22:58:50.261642 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.261669 kubelet[2752]: W1213 22:58:50.261665 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.261782 kubelet[2752]: E1213 22:58:50.261685 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.262097 kubelet[2752]: E1213 22:58:50.262059 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.262097 kubelet[2752]: W1213 22:58:50.262076 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.262248 kubelet[2752]: E1213 22:58:50.262136 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:50.262248 kubelet[2752]: E1213 22:58:50.262233 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.262248 kubelet[2752]: W1213 22:58:50.262241 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.262426 kubelet[2752]: E1213 22:58:50.262289 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.262518 kubelet[2752]: E1213 22:58:50.262501 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.262518 kubelet[2752]: W1213 22:58:50.262515 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.262713 kubelet[2752]: E1213 22:58:50.262689 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.262713 kubelet[2752]: W1213 22:58:50.262702 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.262713 kubelet[2752]: E1213 22:58:50.262714 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.263112 kubelet[2752]: E1213 22:58:50.263069 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.264613 kubelet[2752]: E1213 22:58:50.263298 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.264613 kubelet[2752]: W1213 22:58:50.263321 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.264613 kubelet[2752]: E1213 22:58:50.263335 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.264940 kubelet[2752]: E1213 22:58:50.264908 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.264940 kubelet[2752]: W1213 22:58:50.264927 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.264940 kubelet[2752]: E1213 22:58:50.264947 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:50.265176 kubelet[2752]: E1213 22:58:50.265160 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.265176 kubelet[2752]: W1213 22:58:50.265173 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.265245 kubelet[2752]: E1213 22:58:50.265209 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.265357 kubelet[2752]: E1213 22:58:50.265342 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.265357 kubelet[2752]: W1213 22:58:50.265355 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.265425 kubelet[2752]: E1213 22:58:50.265371 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.266441 kubelet[2752]: E1213 22:58:50.266392 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.266441 kubelet[2752]: W1213 22:58:50.266414 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.266441 kubelet[2752]: E1213 22:58:50.266440 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.266795 kubelet[2752]: E1213 22:58:50.266768 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.266795 kubelet[2752]: W1213 22:58:50.266783 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.266795 kubelet[2752]: E1213 22:58:50.266824 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.266795 kubelet[2752]: E1213 22:58:50.266970 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.266795 kubelet[2752]: W1213 22:58:50.266980 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.266795 kubelet[2752]: E1213 22:58:50.266991 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:50.267361 kubelet[2752]: E1213 22:58:50.267343 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.267429 kubelet[2752]: W1213 22:58:50.267416 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.267497 kubelet[2752]: E1213 22:58:50.267485 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.268321 kubelet[2752]: E1213 22:58:50.268281 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.268321 kubelet[2752]: W1213 22:58:50.268311 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.268399 kubelet[2752]: E1213 22:58:50.268339 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:50.268782 kubelet[2752]: E1213 22:58:50.268750 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:50.268782 kubelet[2752]: W1213 22:58:50.268766 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:50.268782 kubelet[2752]: E1213 22:58:50.268779 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.120793 kubelet[2752]: I1213 22:58:51.120756 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 22:58:51.121579 kubelet[2752]: E1213 22:58:51.121536 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:51.166983 kubelet[2752]: E1213 22:58:51.166956 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.166983 kubelet[2752]: W1213 22:58:51.166978 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.167143 kubelet[2752]: E1213 22:58:51.167044 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.167233 kubelet[2752]: E1213 22:58:51.167221 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.167268 kubelet[2752]: W1213 22:58:51.167234 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.167268 kubelet[2752]: E1213 22:58:51.167244 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.167461 kubelet[2752]: E1213 22:58:51.167446 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.167499 kubelet[2752]: W1213 22:58:51.167464 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.167499 kubelet[2752]: E1213 22:58:51.167474 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.167712 kubelet[2752]: E1213 22:58:51.167698 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.167712 kubelet[2752]: W1213 22:58:51.167710 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.167781 kubelet[2752]: E1213 22:58:51.167720 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.167924 kubelet[2752]: E1213 22:58:51.167910 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.167924 kubelet[2752]: W1213 22:58:51.167923 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.167987 kubelet[2752]: E1213 22:58:51.167941 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.168114 kubelet[2752]: E1213 22:58:51.168104 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.168114 kubelet[2752]: W1213 22:58:51.168114 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.168180 kubelet[2752]: E1213 22:58:51.168122 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.168304 kubelet[2752]: E1213 22:58:51.168286 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.168345 kubelet[2752]: W1213 22:58:51.168306 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.168345 kubelet[2752]: E1213 22:58:51.168317 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.168547 kubelet[2752]: E1213 22:58:51.168533 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.168547 kubelet[2752]: W1213 22:58:51.168546 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.168630 kubelet[2752]: E1213 22:58:51.168570 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.168804 kubelet[2752]: E1213 22:58:51.168793 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.168834 kubelet[2752]: W1213 22:58:51.168805 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.168834 kubelet[2752]: E1213 22:58:51.168816 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.168966 kubelet[2752]: E1213 22:58:51.168955 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.168966 kubelet[2752]: W1213 22:58:51.168966 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.169018 kubelet[2752]: E1213 22:58:51.168974 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.169150 kubelet[2752]: E1213 22:58:51.169138 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.169150 kubelet[2752]: W1213 22:58:51.169150 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.169212 kubelet[2752]: E1213 22:58:51.169158 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.169325 kubelet[2752]: E1213 22:58:51.169309 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.169367 kubelet[2752]: W1213 22:58:51.169321 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.169367 kubelet[2752]: E1213 22:58:51.169336 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.169574 kubelet[2752]: E1213 22:58:51.169545 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.169574 kubelet[2752]: W1213 22:58:51.169573 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.169640 kubelet[2752]: E1213 22:58:51.169583 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.169801 kubelet[2752]: E1213 22:58:51.169785 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.169801 kubelet[2752]: W1213 22:58:51.169798 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.169851 kubelet[2752]: E1213 22:58:51.169809 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.169981 kubelet[2752]: E1213 22:58:51.169968 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.169981 kubelet[2752]: W1213 22:58:51.169979 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.170042 kubelet[2752]: E1213 22:58:51.169987 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.170258 kubelet[2752]: E1213 22:58:51.170243 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.170258 kubelet[2752]: W1213 22:58:51.170257 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.170333 kubelet[2752]: E1213 22:58:51.170270 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.170539 kubelet[2752]: E1213 22:58:51.170525 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.170590 kubelet[2752]: W1213 22:58:51.170538 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.171209 kubelet[2752]: E1213 22:58:51.171175 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.171209 kubelet[2752]: W1213 22:58:51.171194 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.171209 kubelet[2752]: E1213 22:58:51.171206 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.171525 kubelet[2752]: E1213 22:58:51.171511 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.171525 kubelet[2752]: W1213 22:58:51.171523 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.171585 kubelet[2752]: E1213 22:58:51.171534 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.171852 kubelet[2752]: E1213 22:58:51.171838 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.171875 kubelet[2752]: W1213 22:58:51.171852 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.171875 kubelet[2752]: E1213 22:58:51.171864 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.172648 kubelet[2752]: E1213 22:58:51.172630 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.172648 kubelet[2752]: W1213 22:58:51.172646 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.172709 kubelet[2752]: E1213 22:58:51.172661 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.173530 kubelet[2752]: E1213 22:58:51.173499 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.173530 kubelet[2752]: W1213 22:58:51.173517 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.173530 kubelet[2752]: E1213 22:58:51.173530 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.176635 kubelet[2752]: E1213 22:58:51.176602 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.176965 kubelet[2752]: E1213 22:58:51.176935 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.176965 kubelet[2752]: W1213 22:58:51.176952 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.177028 kubelet[2752]: E1213 22:58:51.176972 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.177247 kubelet[2752]: E1213 22:58:51.177211 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.177247 kubelet[2752]: W1213 22:58:51.177227 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.177247 kubelet[2752]: E1213 22:58:51.177241 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.177503 kubelet[2752]: E1213 22:58:51.177489 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.177529 kubelet[2752]: W1213 22:58:51.177503 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.177529 kubelet[2752]: E1213 22:58:51.177524 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.177792 kubelet[2752]: E1213 22:58:51.177779 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.177823 kubelet[2752]: W1213 22:58:51.177792 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.177823 kubelet[2752]: E1213 22:58:51.177806 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.177996 kubelet[2752]: E1213 22:58:51.177985 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.178024 kubelet[2752]: W1213 22:58:51.177996 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.178096 kubelet[2752]: E1213 22:58:51.178076 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.178208 kubelet[2752]: E1213 22:58:51.178187 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.178208 kubelet[2752]: W1213 22:58:51.178200 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.178263 kubelet[2752]: E1213 22:58:51.178210 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.178471 kubelet[2752]: E1213 22:58:51.178452 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.178471 kubelet[2752]: W1213 22:58:51.178467 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.178549 kubelet[2752]: E1213 22:58:51.178482 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.178686 kubelet[2752]: E1213 22:58:51.178671 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.178686 kubelet[2752]: W1213 22:58:51.178683 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.178897 kubelet[2752]: E1213 22:58:51.178695 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.178992 kubelet[2752]: E1213 22:58:51.178977 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.179042 kubelet[2752]: W1213 22:58:51.179031 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.179106 kubelet[2752]: E1213 22:58:51.179095 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.179408 kubelet[2752]: E1213 22:58:51.179387 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.179445 kubelet[2752]: W1213 22:58:51.179405 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.179445 kubelet[2752]: E1213 22:58:51.179441 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 22:58:51.179853 kubelet[2752]: E1213 22:58:51.179822 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 22:58:51.179853 kubelet[2752]: W1213 22:58:51.179840 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 22:58:51.179853 kubelet[2752]: E1213 22:58:51.179852 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 22:58:51.350396 containerd[1608]: time="2025-12-13T22:58:51.350043586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:51.350914 containerd[1608]: time="2025-12-13T22:58:51.350876543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4263307" Dec 13 22:58:51.352420 containerd[1608]: time="2025-12-13T22:58:51.352392337Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:51.354461 containerd[1608]: time="2025-12-13T22:58:51.354408089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:51.355122 containerd[1608]: time="2025-12-13T22:58:51.355088686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.356220906s" Dec 13 22:58:51.355184 containerd[1608]: time="2025-12-13T22:58:51.355120046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 13 22:58:51.358319 containerd[1608]: time="2025-12-13T22:58:51.358193353Z" level=info msg="CreateContainer within sandbox \"d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 22:58:51.367409 containerd[1608]: time="2025-12-13T22:58:51.367353957Z" level=info msg="Container f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:51.370993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701686330.mount: Deactivated successfully. Dec 13 22:58:51.378173 containerd[1608]: time="2025-12-13T22:58:51.378118913Z" level=info msg="CreateContainer within sandbox \"d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691\"" Dec 13 22:58:51.378682 containerd[1608]: time="2025-12-13T22:58:51.378651711Z" level=info msg="StartContainer for \"f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691\"" Dec 13 22:58:51.380581 containerd[1608]: time="2025-12-13T22:58:51.380116025Z" level=info msg="connecting to shim f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691" address="unix:///run/containerd/s/776a40e8aab695a63079e868f98c76ba56b788fe81bf009a8ca6b8886fd11e91" protocol=ttrpc version=3 Dec 13 22:58:51.401787 systemd[1]: Started cri-containerd-f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691.scope - libcontainer container f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691. 
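[Editor's note] The repeated "unexpected end of JSON input" / "executable file not found in $PATH" messages above are kubelet probing the FlexVolume plugin directory before Calico's flexvol-driver init container (whose pod2daemon-flexvol image is pulled just above) has installed the nodeagent~uds/uds binary; with nothing to execute, the driver call returns empty output and JSON decoding fails. A minimal sketch, assuming the standard FlexVolume driver contract, of what a driver at that path is expected to print when kubelet invokes it with "init" (the path and driver name come from the log; the script body is illustrative, not Calico's actual driver):

#!/bin/sh
# Illustrative FlexVolume driver stub for /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
# kubelet runs "<driver> init" when probing plugins and expects a JSON status object on stdout.
if [ "$1" = "init" ]; then
    echo '{"status": "Success", "capabilities": {"attach": false}}'
    exit 0
fi
# Verbs this sketch does not implement are reported as unsupported.
echo '{"status": "Not supported"}'
exit 1

Once the flexvol-driver container started above writes the real binary into that directory, these probe errors stop on the next plugin scan.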
Dec 13 22:58:51.460000 audit: BPF prog-id=170 op=LOAD Dec 13 22:58:51.462982 kernel: kauditd_printk_skb: 86 callbacks suppressed Dec 13 22:58:51.463042 kernel: audit: type=1334 audit(1765666731.460:558): prog-id=170 op=LOAD Dec 13 22:58:51.463066 kernel: audit: type=1300 audit(1765666731.460:558): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.460000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.468755 kernel: audit: type=1327 audit(1765666731.460:558): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.460000 audit: BPF prog-id=171 op=LOAD Dec 13 22:58:51.460000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.472387 kernel: audit: type=1334 audit(1765666731.460:559): prog-id=171 op=LOAD Dec 13 22:58:51.472611 kernel: audit: type=1300 audit(1765666731.460:559): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.475528 kernel: audit: type=1327 audit(1765666731.460:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.475673 kernel: audit: type=1334 audit(1765666731.460:560): prog-id=171 op=UNLOAD Dec 13 22:58:51.460000 audit: BPF prog-id=171 op=UNLOAD Dec 13 22:58:51.460000 audit[3448]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.479316 kernel: audit: type=1300 
audit(1765666731.460:560): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.482460 kernel: audit: type=1327 audit(1765666731.460:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.460000 audit: BPF prog-id=170 op=UNLOAD Dec 13 22:58:51.483655 kernel: audit: type=1334 audit(1765666731.460:561): prog-id=170 op=UNLOAD Dec 13 22:58:51.460000 audit[3448]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.460000 audit: BPF prog-id=172 op=LOAD Dec 13 22:58:51.460000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3287 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:51.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639303633333537653166316138663331663232626633313565303838 Dec 13 22:58:51.497650 containerd[1608]: time="2025-12-13T22:58:51.497035634Z" level=info msg="StartContainer for \"f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691\" returns successfully" Dec 13 22:58:51.513536 systemd[1]: cri-containerd-f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691.scope: Deactivated successfully. Dec 13 22:58:51.518998 containerd[1608]: time="2025-12-13T22:58:51.518949386Z" level=info msg="received container exit event container_id:\"f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691\" id:\"f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691\" pid:3460 exited_at:{seconds:1765666731 nanos:518150589}" Dec 13 22:58:51.518000 audit: BPF prog-id=172 op=UNLOAD Dec 13 22:58:51.540709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9063357e1f1a8f31f22bf315e088516f4097bce8c58d3ea1f8a9eed1f07c691-rootfs.mount: Deactivated successfully. 
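[Editor's note] The audit records around the "callbacks suppressed" notice show runc loading and unloading BPF programs while it sets up the flexvol-driver container; the PROCTITLE field is the hex-encoded, NUL-separated command line of the audited process. It can be read back with xxd; the placeholder below stands for a PROCTITLE value copied from one of the records above, not a new value:

# Decode an audit PROCTITLE hex string back into the original argv (NUL separators become spaces).
echo '<PROCTITLE_HEX_FROM_LOG>' | xxd -r -p | tr '\0' ' '; echo
# For the records above this decodes to roughly:
#   runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/...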
Dec 13 22:58:52.021904 kubelet[2752]: E1213 22:58:52.021848 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:58:52.120088 kubelet[2752]: E1213 22:58:52.119876 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:52.121817 containerd[1608]: time="2025-12-13T22:58:52.121756987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 13 22:58:53.861135 kubelet[2752]: I1213 22:58:53.860925 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 22:58:53.861485 kubelet[2752]: E1213 22:58:53.861270 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:53.893000 audit[3504]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:53.893000 audit[3504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff92aaa90 a2=0 a3=1 items=0 ppid=2863 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:53.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:53.899000 audit[3504]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3504 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:58:53.899000 audit[3504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff92aaa90 a2=0 a3=1 items=0 ppid=2863 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:53.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:58:54.022482 kubelet[2752]: E1213 22:58:54.022435 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:58:54.125138 kubelet[2752]: E1213 22:58:54.125028 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:54.363599 containerd[1608]: time="2025-12-13T22:58:54.363188722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:54.363983 containerd[1608]: time="2025-12-13T22:58:54.363804480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 13 22:58:54.364648 
containerd[1608]: time="2025-12-13T22:58:54.364615477Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:54.367536 containerd[1608]: time="2025-12-13T22:58:54.367497788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:58:54.368245 containerd[1608]: time="2025-12-13T22:58:54.368205905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.246410718s" Dec 13 22:58:54.368245 containerd[1608]: time="2025-12-13T22:58:54.368238185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 13 22:58:54.370400 containerd[1608]: time="2025-12-13T22:58:54.370357578Z" level=info msg="CreateContainer within sandbox \"d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 22:58:54.380622 containerd[1608]: time="2025-12-13T22:58:54.380339825Z" level=info msg="Container 1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:58:54.387522 containerd[1608]: time="2025-12-13T22:58:54.387478481Z" level=info msg="CreateContainer within sandbox \"d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad\"" Dec 13 22:58:54.388282 containerd[1608]: time="2025-12-13T22:58:54.388240039Z" level=info msg="StartContainer for \"1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad\"" Dec 13 22:58:54.389936 containerd[1608]: time="2025-12-13T22:58:54.389894833Z" level=info msg="connecting to shim 1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad" address="unix:///run/containerd/s/776a40e8aab695a63079e868f98c76ba56b788fe81bf009a8ca6b8886fd11e91" protocol=ttrpc version=3 Dec 13 22:58:54.414826 systemd[1]: Started cri-containerd-1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad.scope - libcontainer container 1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad. 
Dec 13 22:58:54.487000 audit: BPF prog-id=173 op=LOAD Dec 13 22:58:54.487000 audit[3509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3287 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162366661346335363731326330666662343665383961623465613037 Dec 13 22:58:54.487000 audit: BPF prog-id=174 op=LOAD Dec 13 22:58:54.487000 audit[3509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3287 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162366661346335363731326330666662343665383961623465613037 Dec 13 22:58:54.487000 audit: BPF prog-id=174 op=UNLOAD Dec 13 22:58:54.487000 audit[3509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162366661346335363731326330666662343665383961623465613037 Dec 13 22:58:54.487000 audit: BPF prog-id=173 op=UNLOAD Dec 13 22:58:54.487000 audit[3509]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162366661346335363731326330666662343665383961623465613037 Dec 13 22:58:54.487000 audit: BPF prog-id=175 op=LOAD Dec 13 22:58:54.487000 audit[3509]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3287 pid=3509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:58:54.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162366661346335363731326330666662343665383961623465613037 Dec 13 22:58:54.508545 containerd[1608]: time="2025-12-13T22:58:54.508441720Z" level=info msg="StartContainer for 
\"1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad\" returns successfully" Dec 13 22:58:55.055704 systemd[1]: cri-containerd-1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad.scope: Deactivated successfully. Dec 13 22:58:55.056121 systemd[1]: cri-containerd-1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad.scope: Consumed 458ms CPU time, 200.5M memory peak, 2M read from disk, 165.9M written to disk. Dec 13 22:58:55.057229 containerd[1608]: time="2025-12-13T22:58:55.057194869Z" level=info msg="received container exit event container_id:\"1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad\" id:\"1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad\" pid:3521 exited_at:{seconds:1765666735 nanos:56629031}" Dec 13 22:58:55.059000 audit: BPF prog-id=175 op=UNLOAD Dec 13 22:58:55.076937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b6fa4c56712c0ffb46e89ab4ea0748aacc0e7226a77b4022c2cdd670bc295ad-rootfs.mount: Deactivated successfully. Dec 13 22:58:55.122332 kubelet[2752]: I1213 22:58:55.122186 2752 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 13 22:58:55.131353 kubelet[2752]: E1213 22:58:55.131325 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:55.167493 systemd[1]: Created slice kubepods-burstable-pod8456e800_6c4f_44dd_baf5_c2d42ab5dd0e.slice - libcontainer container kubepods-burstable-pod8456e800_6c4f_44dd_baf5_c2d42ab5dd0e.slice. Dec 13 22:58:55.187755 systemd[1]: Created slice kubepods-besteffort-pod35b16e82_6b93_4598_9c60_bbc71d0b419a.slice - libcontainer container kubepods-besteffort-pod35b16e82_6b93_4598_9c60_bbc71d0b419a.slice. Dec 13 22:58:55.193047 systemd[1]: Created slice kubepods-burstable-pod1927b80b_e803_4102_9a4e_f1521044c395.slice - libcontainer container kubepods-burstable-pod1927b80b_e803_4102_9a4e_f1521044c395.slice. Dec 13 22:58:55.198781 systemd[1]: Created slice kubepods-besteffort-podde321aa1_ac48_4c1f_9093_69f544b891b9.slice - libcontainer container kubepods-besteffort-podde321aa1_ac48_4c1f_9093_69f544b891b9.slice. 
Dec 13 22:58:55.200314 kubelet[2752]: I1213 22:58:55.200255 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de321aa1-ac48-4c1f-9093-69f544b891b9-tigera-ca-bundle\") pod \"calico-kube-controllers-7dc99d86df-qrdhj\" (UID: \"de321aa1-ac48-4c1f-9093-69f544b891b9\") " pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" Dec 13 22:58:55.200314 kubelet[2752]: I1213 22:58:55.200307 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdcqr\" (UniqueName: \"kubernetes.io/projected/9fe07c94-e384-4193-9d4b-ef9d906eb265-kube-api-access-vdcqr\") pod \"goldmane-666569f655-vmghn\" (UID: \"9fe07c94-e384-4193-9d4b-ef9d906eb265\") " pod="calico-system/goldmane-666569f655-vmghn" Dec 13 22:58:55.200437 kubelet[2752]: I1213 22:58:55.200327 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8456e800-6c4f-44dd-baf5-c2d42ab5dd0e-config-volume\") pod \"coredns-668d6bf9bc-7psl5\" (UID: \"8456e800-6c4f-44dd-baf5-c2d42ab5dd0e\") " pod="kube-system/coredns-668d6bf9bc-7psl5" Dec 13 22:58:55.200437 kubelet[2752]: I1213 22:58:55.200347 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fe07c94-e384-4193-9d4b-ef9d906eb265-config\") pod \"goldmane-666569f655-vmghn\" (UID: \"9fe07c94-e384-4193-9d4b-ef9d906eb265\") " pod="calico-system/goldmane-666569f655-vmghn" Dec 13 22:58:55.200437 kubelet[2752]: I1213 22:58:55.200363 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6n4d\" (UniqueName: \"kubernetes.io/projected/8456e800-6c4f-44dd-baf5-c2d42ab5dd0e-kube-api-access-s6n4d\") pod \"coredns-668d6bf9bc-7psl5\" (UID: \"8456e800-6c4f-44dd-baf5-c2d42ab5dd0e\") " pod="kube-system/coredns-668d6bf9bc-7psl5" Dec 13 22:58:55.200437 kubelet[2752]: I1213 22:58:55.200381 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-ca-bundle\") pod \"whisker-57cccdb76b-7qgn9\" (UID: \"613acf46-413a-48c0-b66e-dada1b02aafe\") " pod="calico-system/whisker-57cccdb76b-7qgn9" Dec 13 22:58:55.200437 kubelet[2752]: I1213 22:58:55.200397 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef0b8707-0b6c-4822-b958-0aa6bda67c50-calico-apiserver-certs\") pod \"calico-apiserver-7d74677ddd-xdvvs\" (UID: \"ef0b8707-0b6c-4822-b958-0aa6bda67c50\") " pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" Dec 13 22:58:55.200573 kubelet[2752]: I1213 22:58:55.200412 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9fe07c94-e384-4193-9d4b-ef9d906eb265-goldmane-key-pair\") pod \"goldmane-666569f655-vmghn\" (UID: \"9fe07c94-e384-4193-9d4b-ef9d906eb265\") " pod="calico-system/goldmane-666569f655-vmghn" Dec 13 22:58:55.200573 kubelet[2752]: I1213 22:58:55.200450 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v4vq\" (UniqueName: 
\"kubernetes.io/projected/de321aa1-ac48-4c1f-9093-69f544b891b9-kube-api-access-7v4vq\") pod \"calico-kube-controllers-7dc99d86df-qrdhj\" (UID: \"de321aa1-ac48-4c1f-9093-69f544b891b9\") " pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" Dec 13 22:58:55.200573 kubelet[2752]: I1213 22:58:55.200468 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fm82\" (UniqueName: \"kubernetes.io/projected/613acf46-413a-48c0-b66e-dada1b02aafe-kube-api-access-5fm82\") pod \"whisker-57cccdb76b-7qgn9\" (UID: \"613acf46-413a-48c0-b66e-dada1b02aafe\") " pod="calico-system/whisker-57cccdb76b-7qgn9" Dec 13 22:58:55.200573 kubelet[2752]: I1213 22:58:55.200486 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbqbj\" (UniqueName: \"kubernetes.io/projected/35b16e82-6b93-4598-9c60-bbc71d0b419a-kube-api-access-kbqbj\") pod \"calico-apiserver-7d74677ddd-w5hbt\" (UID: \"35b16e82-6b93-4598-9c60-bbc71d0b419a\") " pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" Dec 13 22:58:55.200987 kubelet[2752]: I1213 22:58:55.200507 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hm9z\" (UniqueName: \"kubernetes.io/projected/ef0b8707-0b6c-4822-b958-0aa6bda67c50-kube-api-access-6hm9z\") pod \"calico-apiserver-7d74677ddd-xdvvs\" (UID: \"ef0b8707-0b6c-4822-b958-0aa6bda67c50\") " pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" Dec 13 22:58:55.201032 kubelet[2752]: I1213 22:58:55.201015 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wlgd\" (UniqueName: \"kubernetes.io/projected/bc906328-edba-4f47-8190-6385bd5de6a4-kube-api-access-5wlgd\") pod \"calico-apiserver-6ddf58565-58qbn\" (UID: \"bc906328-edba-4f47-8190-6385bd5de6a4\") " pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" Dec 13 22:58:55.201074 kubelet[2752]: I1213 22:58:55.201055 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35b16e82-6b93-4598-9c60-bbc71d0b419a-calico-apiserver-certs\") pod \"calico-apiserver-7d74677ddd-w5hbt\" (UID: \"35b16e82-6b93-4598-9c60-bbc71d0b419a\") " pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" Dec 13 22:58:55.201115 kubelet[2752]: I1213 22:58:55.201081 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-backend-key-pair\") pod \"whisker-57cccdb76b-7qgn9\" (UID: \"613acf46-413a-48c0-b66e-dada1b02aafe\") " pod="calico-system/whisker-57cccdb76b-7qgn9" Dec 13 22:58:55.201115 kubelet[2752]: I1213 22:58:55.201100 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1927b80b-e803-4102-9a4e-f1521044c395-config-volume\") pod \"coredns-668d6bf9bc-w8xcm\" (UID: \"1927b80b-e803-4102-9a4e-f1521044c395\") " pod="kube-system/coredns-668d6bf9bc-w8xcm" Dec 13 22:58:55.201194 kubelet[2752]: I1213 22:58:55.201116 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdnf\" (UniqueName: \"kubernetes.io/projected/1927b80b-e803-4102-9a4e-f1521044c395-kube-api-access-5hdnf\") pod \"coredns-668d6bf9bc-w8xcm\" (UID: 
\"1927b80b-e803-4102-9a4e-f1521044c395\") " pod="kube-system/coredns-668d6bf9bc-w8xcm" Dec 13 22:58:55.201194 kubelet[2752]: I1213 22:58:55.201140 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc906328-edba-4f47-8190-6385bd5de6a4-calico-apiserver-certs\") pod \"calico-apiserver-6ddf58565-58qbn\" (UID: \"bc906328-edba-4f47-8190-6385bd5de6a4\") " pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" Dec 13 22:58:55.201194 kubelet[2752]: I1213 22:58:55.201186 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fe07c94-e384-4193-9d4b-ef9d906eb265-goldmane-ca-bundle\") pod \"goldmane-666569f655-vmghn\" (UID: \"9fe07c94-e384-4193-9d4b-ef9d906eb265\") " pod="calico-system/goldmane-666569f655-vmghn" Dec 13 22:58:55.211028 systemd[1]: Created slice kubepods-besteffort-podbc906328_edba_4f47_8190_6385bd5de6a4.slice - libcontainer container kubepods-besteffort-podbc906328_edba_4f47_8190_6385bd5de6a4.slice. Dec 13 22:58:55.219855 systemd[1]: Created slice kubepods-besteffort-pod613acf46_413a_48c0_b66e_dada1b02aafe.slice - libcontainer container kubepods-besteffort-pod613acf46_413a_48c0_b66e_dada1b02aafe.slice. Dec 13 22:58:55.228290 systemd[1]: Created slice kubepods-besteffort-pod9fe07c94_e384_4193_9d4b_ef9d906eb265.slice - libcontainer container kubepods-besteffort-pod9fe07c94_e384_4193_9d4b_ef9d906eb265.slice. Dec 13 22:58:55.234142 systemd[1]: Created slice kubepods-besteffort-podef0b8707_0b6c_4822_b958_0aa6bda67c50.slice - libcontainer container kubepods-besteffort-podef0b8707_0b6c_4822_b958_0aa6bda67c50.slice. Dec 13 22:58:55.470580 kubelet[2752]: E1213 22:58:55.470425 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:55.472028 containerd[1608]: time="2025-12-13T22:58:55.471982898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7psl5,Uid:8456e800-6c4f-44dd-baf5-c2d42ab5dd0e,Namespace:kube-system,Attempt:0,}" Dec 13 22:58:55.491780 containerd[1608]: time="2025-12-13T22:58:55.491635997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-w5hbt,Uid:35b16e82-6b93-4598-9c60-bbc71d0b419a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 22:58:55.497050 kubelet[2752]: E1213 22:58:55.496859 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:55.498805 containerd[1608]: time="2025-12-13T22:58:55.498740055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8xcm,Uid:1927b80b-e803-4102-9a4e-f1521044c395,Namespace:kube-system,Attempt:0,}" Dec 13 22:58:55.504481 containerd[1608]: time="2025-12-13T22:58:55.504441277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc99d86df-qrdhj,Uid:de321aa1-ac48-4c1f-9093-69f544b891b9,Namespace:calico-system,Attempt:0,}" Dec 13 22:58:55.516330 containerd[1608]: time="2025-12-13T22:58:55.516095641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddf58565-58qbn,Uid:bc906328-edba-4f47-8190-6385bd5de6a4,Namespace:calico-apiserver,Attempt:0,}" Dec 13 22:58:55.524498 containerd[1608]: time="2025-12-13T22:58:55.524298696Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57cccdb76b-7qgn9,Uid:613acf46-413a-48c0-b66e-dada1b02aafe,Namespace:calico-system,Attempt:0,}" Dec 13 22:58:55.533350 containerd[1608]: time="2025-12-13T22:58:55.533251348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmghn,Uid:9fe07c94-e384-4193-9d4b-ef9d906eb265,Namespace:calico-system,Attempt:0,}" Dec 13 22:58:55.538291 containerd[1608]: time="2025-12-13T22:58:55.538185012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-xdvvs,Uid:ef0b8707-0b6c-4822-b958-0aa6bda67c50,Namespace:calico-apiserver,Attempt:0,}" Dec 13 22:58:55.635950 containerd[1608]: time="2025-12-13T22:58:55.635821829Z" level=error msg="Failed to destroy network for sandbox \"079f79b7b8724e1a93325a7069b0d4f7f623d0d1fdd46385957fd1990f8a28c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.638385 containerd[1608]: time="2025-12-13T22:58:55.638336181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7psl5,Uid:8456e800-6c4f-44dd-baf5-c2d42ab5dd0e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"079f79b7b8724e1a93325a7069b0d4f7f623d0d1fdd46385957fd1990f8a28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.640761 containerd[1608]: time="2025-12-13T22:58:55.640705133Z" level=error msg="Failed to destroy network for sandbox \"f6cd8be5b819fd8ea585960dadb445096baa6a8dfc1382c42443aa23cf31ba38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.642934 kubelet[2752]: E1213 22:58:55.642875 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"079f79b7b8724e1a93325a7069b0d4f7f623d0d1fdd46385957fd1990f8a28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.643515 containerd[1608]: time="2025-12-13T22:58:55.643463845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8xcm,Uid:1927b80b-e803-4102-9a4e-f1521044c395,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6cd8be5b819fd8ea585960dadb445096baa6a8dfc1382c42443aa23cf31ba38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.643915 kubelet[2752]: E1213 22:58:55.643684 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6cd8be5b819fd8ea585960dadb445096baa6a8dfc1382c42443aa23cf31ba38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.645318 kubelet[2752]: E1213 22:58:55.645065 2752 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"079f79b7b8724e1a93325a7069b0d4f7f623d0d1fdd46385957fd1990f8a28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7psl5" Dec 13 22:58:55.645318 kubelet[2752]: E1213 22:58:55.645111 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"079f79b7b8724e1a93325a7069b0d4f7f623d0d1fdd46385957fd1990f8a28c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7psl5" Dec 13 22:58:55.645318 kubelet[2752]: E1213 22:58:55.645165 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7psl5_kube-system(8456e800-6c4f-44dd-baf5-c2d42ab5dd0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7psl5_kube-system(8456e800-6c4f-44dd-baf5-c2d42ab5dd0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"079f79b7b8724e1a93325a7069b0d4f7f623d0d1fdd46385957fd1990f8a28c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7psl5" podUID="8456e800-6c4f-44dd-baf5-c2d42ab5dd0e" Dec 13 22:58:55.645479 kubelet[2752]: E1213 22:58:55.645340 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6cd8be5b819fd8ea585960dadb445096baa6a8dfc1382c42443aa23cf31ba38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w8xcm" Dec 13 22:58:55.645479 kubelet[2752]: E1213 22:58:55.645377 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6cd8be5b819fd8ea585960dadb445096baa6a8dfc1382c42443aa23cf31ba38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w8xcm" Dec 13 22:58:55.645479 kubelet[2752]: E1213 22:58:55.645421 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w8xcm_kube-system(1927b80b-e803-4102-9a4e-f1521044c395)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w8xcm_kube-system(1927b80b-e803-4102-9a4e-f1521044c395)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6cd8be5b819fd8ea585960dadb445096baa6a8dfc1382c42443aa23cf31ba38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w8xcm" podUID="1927b80b-e803-4102-9a4e-f1521044c395" Dec 13 22:58:55.648039 containerd[1608]: 
time="2025-12-13T22:58:55.647710712Z" level=error msg="Failed to destroy network for sandbox \"5f16c992f639f315266e209ac6f19d191bffe505ff061373ab72c39926371369\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.648201 containerd[1608]: time="2025-12-13T22:58:55.648127310Z" level=error msg="Failed to destroy network for sandbox \"e9c26aa44c39a977b1f5fb9c9eaf89c05f7af5ebc1becc09f936a96f6ddea157\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.651209 containerd[1608]: time="2025-12-13T22:58:55.650887342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-xdvvs,Uid:ef0b8707-0b6c-4822-b958-0aa6bda67c50,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f16c992f639f315266e209ac6f19d191bffe505ff061373ab72c39926371369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.651675 kubelet[2752]: E1213 22:58:55.651496 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f16c992f639f315266e209ac6f19d191bffe505ff061373ab72c39926371369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.651675 kubelet[2752]: E1213 22:58:55.651574 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f16c992f639f315266e209ac6f19d191bffe505ff061373ab72c39926371369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" Dec 13 22:58:55.651675 kubelet[2752]: E1213 22:58:55.651594 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f16c992f639f315266e209ac6f19d191bffe505ff061373ab72c39926371369\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" Dec 13 22:58:55.651788 kubelet[2752]: E1213 22:58:55.651629 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d74677ddd-xdvvs_calico-apiserver(ef0b8707-0b6c-4822-b958-0aa6bda67c50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d74677ddd-xdvvs_calico-apiserver(ef0b8707-0b6c-4822-b958-0aa6bda67c50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f16c992f639f315266e209ac6f19d191bffe505ff061373ab72c39926371369\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" podUID="ef0b8707-0b6c-4822-b958-0aa6bda67c50" Dec 13 22:58:55.652008 containerd[1608]: time="2025-12-13T22:58:55.651967298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddf58565-58qbn,Uid:bc906328-edba-4f47-8190-6385bd5de6a4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c26aa44c39a977b1f5fb9c9eaf89c05f7af5ebc1becc09f936a96f6ddea157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.653625 kubelet[2752]: E1213 22:58:55.653015 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c26aa44c39a977b1f5fb9c9eaf89c05f7af5ebc1becc09f936a96f6ddea157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.653625 kubelet[2752]: E1213 22:58:55.653071 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c26aa44c39a977b1f5fb9c9eaf89c05f7af5ebc1becc09f936a96f6ddea157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" Dec 13 22:58:55.653625 kubelet[2752]: E1213 22:58:55.653086 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9c26aa44c39a977b1f5fb9c9eaf89c05f7af5ebc1becc09f936a96f6ddea157\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" Dec 13 22:58:55.653758 kubelet[2752]: E1213 22:58:55.653123 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ddf58565-58qbn_calico-apiserver(bc906328-edba-4f47-8190-6385bd5de6a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ddf58565-58qbn_calico-apiserver(bc906328-edba-4f47-8190-6385bd5de6a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9c26aa44c39a977b1f5fb9c9eaf89c05f7af5ebc1becc09f936a96f6ddea157\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" podUID="bc906328-edba-4f47-8190-6385bd5de6a4" Dec 13 22:58:55.655642 containerd[1608]: time="2025-12-13T22:58:55.655601887Z" level=error msg="Failed to destroy network for sandbox \"689e1341a4cc0ea3c4fbcbe81096f65972edb3f7da862ed23e21efc23f5a5a83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.656608 containerd[1608]: time="2025-12-13T22:58:55.656570764Z" level=error msg="Failed to destroy network for sandbox 
\"bd45b273f9137dbfeceec259cd5c153fb3134649c7ad67fd034075949434776b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.658066 containerd[1608]: time="2025-12-13T22:58:55.657789160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57cccdb76b-7qgn9,Uid:613acf46-413a-48c0-b66e-dada1b02aafe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"689e1341a4cc0ea3c4fbcbe81096f65972edb3f7da862ed23e21efc23f5a5a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.658851 kubelet[2752]: E1213 22:58:55.658811 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689e1341a4cc0ea3c4fbcbe81096f65972edb3f7da862ed23e21efc23f5a5a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.658916 kubelet[2752]: E1213 22:58:55.658870 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689e1341a4cc0ea3c4fbcbe81096f65972edb3f7da862ed23e21efc23f5a5a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57cccdb76b-7qgn9" Dec 13 22:58:55.658916 kubelet[2752]: E1213 22:58:55.658888 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"689e1341a4cc0ea3c4fbcbe81096f65972edb3f7da862ed23e21efc23f5a5a83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57cccdb76b-7qgn9" Dec 13 22:58:55.658983 kubelet[2752]: E1213 22:58:55.658936 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57cccdb76b-7qgn9_calico-system(613acf46-413a-48c0-b66e-dada1b02aafe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57cccdb76b-7qgn9_calico-system(613acf46-413a-48c0-b66e-dada1b02aafe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"689e1341a4cc0ea3c4fbcbe81096f65972edb3f7da862ed23e21efc23f5a5a83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57cccdb76b-7qgn9" podUID="613acf46-413a-48c0-b66e-dada1b02aafe" Dec 13 22:58:55.660072 containerd[1608]: time="2025-12-13T22:58:55.660012673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc99d86df-qrdhj,Uid:de321aa1-ac48-4c1f-9093-69f544b891b9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd45b273f9137dbfeceec259cd5c153fb3134649c7ad67fd034075949434776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.660379 kubelet[2752]: E1213 22:58:55.660349 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd45b273f9137dbfeceec259cd5c153fb3134649c7ad67fd034075949434776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.660530 kubelet[2752]: E1213 22:58:55.660475 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd45b273f9137dbfeceec259cd5c153fb3134649c7ad67fd034075949434776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" Dec 13 22:58:55.660909 kubelet[2752]: E1213 22:58:55.660799 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd45b273f9137dbfeceec259cd5c153fb3134649c7ad67fd034075949434776b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" Dec 13 22:58:55.660909 kubelet[2752]: E1213 22:58:55.660865 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dc99d86df-qrdhj_calico-system(de321aa1-ac48-4c1f-9093-69f544b891b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dc99d86df-qrdhj_calico-system(de321aa1-ac48-4c1f-9093-69f544b891b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd45b273f9137dbfeceec259cd5c153fb3134649c7ad67fd034075949434776b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" podUID="de321aa1-ac48-4c1f-9093-69f544b891b9" Dec 13 22:58:55.662838 containerd[1608]: time="2025-12-13T22:58:55.662803985Z" level=error msg="Failed to destroy network for sandbox \"7a20c30b3607f8db51bb3e260ecaaa514cb28a67a15614001c036217205dae8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.664731 containerd[1608]: time="2025-12-13T22:58:55.664636299Z" level=error msg="Failed to destroy network for sandbox \"1f2701389ca72ddbb1dca6dc9e940feacbc7410acd8d749b2784e59667e7cb0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.665028 containerd[1608]: time="2025-12-13T22:58:55.664981538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmghn,Uid:9fe07c94-e384-4193-9d4b-ef9d906eb265,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7a20c30b3607f8db51bb3e260ecaaa514cb28a67a15614001c036217205dae8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.665245 kubelet[2752]: E1213 22:58:55.665193 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a20c30b3607f8db51bb3e260ecaaa514cb28a67a15614001c036217205dae8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.665307 kubelet[2752]: E1213 22:58:55.665269 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a20c30b3607f8db51bb3e260ecaaa514cb28a67a15614001c036217205dae8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vmghn" Dec 13 22:58:55.665307 kubelet[2752]: E1213 22:58:55.665289 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a20c30b3607f8db51bb3e260ecaaa514cb28a67a15614001c036217205dae8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vmghn" Dec 13 22:58:55.665367 kubelet[2752]: E1213 22:58:55.665339 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vmghn_calico-system(9fe07c94-e384-4193-9d4b-ef9d906eb265)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vmghn_calico-system(9fe07c94-e384-4193-9d4b-ef9d906eb265)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a20c30b3607f8db51bb3e260ecaaa514cb28a67a15614001c036217205dae8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vmghn" podUID="9fe07c94-e384-4193-9d4b-ef9d906eb265" Dec 13 22:58:55.666578 containerd[1608]: time="2025-12-13T22:58:55.666474453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-w5hbt,Uid:35b16e82-6b93-4598-9c60-bbc71d0b419a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f2701389ca72ddbb1dca6dc9e940feacbc7410acd8d749b2784e59667e7cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.666795 kubelet[2752]: E1213 22:58:55.666752 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f2701389ca72ddbb1dca6dc9e940feacbc7410acd8d749b2784e59667e7cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:55.666834 kubelet[2752]: 
E1213 22:58:55.666813 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f2701389ca72ddbb1dca6dc9e940feacbc7410acd8d749b2784e59667e7cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" Dec 13 22:58:55.666866 kubelet[2752]: E1213 22:58:55.666830 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f2701389ca72ddbb1dca6dc9e940feacbc7410acd8d749b2784e59667e7cb0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" Dec 13 22:58:55.666896 kubelet[2752]: E1213 22:58:55.666871 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d74677ddd-w5hbt_calico-apiserver(35b16e82-6b93-4598-9c60-bbc71d0b419a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d74677ddd-w5hbt_calico-apiserver(35b16e82-6b93-4598-9c60-bbc71d0b419a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f2701389ca72ddbb1dca6dc9e940feacbc7410acd8d749b2784e59667e7cb0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" podUID="35b16e82-6b93-4598-9c60-bbc71d0b419a" Dec 13 22:58:56.036513 systemd[1]: Created slice kubepods-besteffort-pode6e20487_ee64_4317_b075_5244d40e7b5a.slice - libcontainer container kubepods-besteffort-pode6e20487_ee64_4317_b075_5244d40e7b5a.slice. 
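Every sandbox create and delete in the stretch above fails for the same reason: the Calico CNI plugin cannot read /var/lib/calico/nodename, a file the calico/node container provides once it is running (the error text itself points at that container). Until the file exists, every CNI add/delete returns this error and the kubelet keeps retrying the affected pods. Below is a minimal Go sketch of the precondition the plugin is describing, using only the path and wording taken from the log; it is illustrative, not Calico's actual source.

    // Sketch of the check behind the repeated CNI failures above: the plugin
    // needs the node name from /var/lib/calico/nodename, written by the
    // calico/node container after it starts. Path and error wording are from
    // the log; everything else is illustrative.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func nodename() (string, error) {
    	const path = "/var/lib/calico/nodename"
    	b, err := os.ReadFile(path)
    	if err != nil {
    		// Mirrors the log message: until calico/node has mounted
    		// /var/lib/calico/ and written the file, every sandbox
    		// add/delete fails and kubelet retries the pod.
    		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
    	}
    	return strings.TrimSpace(string(b)), nil
    }

    func main() {
    	name, err := nodename()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("node name:", name)
    }

At this point in the log the calico/node image is still being pulled; once its container starts (22:59:00 below), sandbox setup begins to succeed, consistent with the whisker sandbox created at 22:59:01.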
Dec 13 22:58:56.039003 containerd[1608]: time="2025-12-13T22:58:56.038965461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dxl2,Uid:e6e20487-ee64-4317-b075-5244d40e7b5a,Namespace:calico-system,Attempt:0,}" Dec 13 22:58:56.089316 containerd[1608]: time="2025-12-13T22:58:56.089247474Z" level=error msg="Failed to destroy network for sandbox \"226d8eec432778925cacb5169944b471015fca759a66816066f97158e10aa731\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:56.091007 containerd[1608]: time="2025-12-13T22:58:56.090962469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dxl2,Uid:e6e20487-ee64-4317-b075-5244d40e7b5a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"226d8eec432778925cacb5169944b471015fca759a66816066f97158e10aa731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:56.091658 kubelet[2752]: E1213 22:58:56.091245 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"226d8eec432778925cacb5169944b471015fca759a66816066f97158e10aa731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 22:58:56.091658 kubelet[2752]: E1213 22:58:56.091330 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"226d8eec432778925cacb5169944b471015fca759a66816066f97158e10aa731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8dxl2" Dec 13 22:58:56.091658 kubelet[2752]: E1213 22:58:56.091353 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"226d8eec432778925cacb5169944b471015fca759a66816066f97158e10aa731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8dxl2" Dec 13 22:58:56.091831 kubelet[2752]: E1213 22:58:56.091398 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8dxl2_calico-system(e6e20487-ee64-4317-b075-5244d40e7b5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8dxl2_calico-system(e6e20487-ee64-4317-b075-5244d40e7b5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"226d8eec432778925cacb5169944b471015fca759a66816066f97158e10aa731\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:58:56.138443 kubelet[2752]: E1213 22:58:56.138138 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:58:56.139784 containerd[1608]: time="2025-12-13T22:58:56.139747407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 13 22:58:56.380225 systemd[1]: run-netns-cni\x2ded0d1b28\x2d25e2\x2d976b\x2d040c\x2d78a3937b2b98.mount: Deactivated successfully. Dec 13 22:58:56.380329 systemd[1]: run-netns-cni\x2d8f90e3a7\x2d9e0c\x2d8cc5\x2d343c\x2db03c7acdbdca.mount: Deactivated successfully. Dec 13 22:58:56.380375 systemd[1]: run-netns-cni\x2d0aa89dbe\x2d5934\x2d744e\x2d2c32\x2dd0a52cb46841.mount: Deactivated successfully. Dec 13 22:58:56.380418 systemd[1]: run-netns-cni\x2dff446fa7\x2d25f7\x2dfd43\x2d8fee\x2d376869435cb7.mount: Deactivated successfully. Dec 13 22:59:00.141774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712902670.mount: Deactivated successfully. Dec 13 22:59:00.432711 containerd[1608]: time="2025-12-13T22:59:00.432582937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:59:00.433283 containerd[1608]: time="2025-12-13T22:59:00.433214575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 13 22:59:00.434259 containerd[1608]: time="2025-12-13T22:59:00.434227093Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:59:00.436014 containerd[1608]: time="2025-12-13T22:59:00.435982489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 22:59:00.436644 containerd[1608]: time="2025-12-13T22:59:00.436425088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.296630121s" Dec 13 22:59:00.436644 containerd[1608]: time="2025-12-13T22:59:00.436458888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 13 22:59:00.459632 containerd[1608]: time="2025-12-13T22:59:00.459406876Z" level=info msg="CreateContainer within sandbox \"d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 22:59:00.473128 containerd[1608]: time="2025-12-13T22:59:00.473087965Z" level=info msg="Container 9549f55da41c150494e1040184cf0fd82e17e3f1fc5acca2e04075710f64bc64: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:59:00.485122 containerd[1608]: time="2025-12-13T22:59:00.485063018Z" level=info msg="CreateContainer within sandbox \"d926c12f493931dc5bfbfffa44b05adc568fa123b4f6943556c8ff92e54b8328\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9549f55da41c150494e1040184cf0fd82e17e3f1fc5acca2e04075710f64bc64\"" Dec 13 22:59:00.486834 containerd[1608]: time="2025-12-13T22:59:00.485630457Z" level=info msg="StartContainer for \"9549f55da41c150494e1040184cf0fd82e17e3f1fc5acca2e04075710f64bc64\"" Dec 13 
22:59:00.487722 containerd[1608]: time="2025-12-13T22:59:00.487692853Z" level=info msg="connecting to shim 9549f55da41c150494e1040184cf0fd82e17e3f1fc5acca2e04075710f64bc64" address="unix:///run/containerd/s/776a40e8aab695a63079e868f98c76ba56b788fe81bf009a8ca6b8886fd11e91" protocol=ttrpc version=3 Dec 13 22:59:00.504823 systemd[1]: Started cri-containerd-9549f55da41c150494e1040184cf0fd82e17e3f1fc5acca2e04075710f64bc64.scope - libcontainer container 9549f55da41c150494e1040184cf0fd82e17e3f1fc5acca2e04075710f64bc64. Dec 13 22:59:00.577000 audit: BPF prog-id=176 op=LOAD Dec 13 22:59:00.580177 kernel: kauditd_printk_skb: 28 callbacks suppressed Dec 13 22:59:00.580226 kernel: audit: type=1334 audit(1765666740.577:572): prog-id=176 op=LOAD Dec 13 22:59:00.577000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.584039 kernel: audit: type=1300 audit(1765666740.577:572): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.584091 kernel: audit: type=1327 audit(1765666740.577:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 22:59:00.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 22:59:00.578000 audit: BPF prog-id=177 op=LOAD Dec 13 22:59:00.587766 kernel: audit: type=1334 audit(1765666740.578:573): prog-id=177 op=LOAD Dec 13 22:59:00.578000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.590967 kernel: audit: type=1300 audit(1765666740.578:573): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 22:59:00.594059 kernel: audit: type=1327 audit(1765666740.578:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 
22:59:00.594129 kernel: audit: type=1334 audit(1765666740.578:574): prog-id=177 op=UNLOAD Dec 13 22:59:00.578000 audit: BPF prog-id=177 op=UNLOAD Dec 13 22:59:00.578000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.598382 kernel: audit: type=1300 audit(1765666740.578:574): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.598447 kernel: audit: type=1327 audit(1765666740.578:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 22:59:00.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 22:59:00.578000 audit: BPF prog-id=176 op=UNLOAD Dec 13 22:59:00.602200 kernel: audit: type=1334 audit(1765666740.578:575): prog-id=176 op=UNLOAD Dec 13 22:59:00.578000 audit[3870]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 22:59:00.578000 audit: BPF prog-id=178 op=LOAD Dec 13 22:59:00.578000 audit[3870]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3287 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:00.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935343966353564613431633135303439346531303430313834636630 Dec 13 22:59:00.615972 containerd[1608]: time="2025-12-13T22:59:00.615935723Z" level=info msg="StartContainer for \"9549f55da41c150494e1040184cf0fd82e17e3f1fc5acca2e04075710f64bc64\" returns successfully" Dec 13 22:59:00.731261 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 22:59:00.731358 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
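The audit records interleaved with the container start above carry the process command line in their PROCTITLE field, hex-encoded because the arguments are separated by NUL bytes. Decoding the prefix that appears in those records yields the runc invocation that started the calico-node task. A short Go decoder follows; the hex constant is copied verbatim from the log, the rest is illustrative.

    // Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    func main() {
    	// Prefix of the proctitle seen in the audit records above.
    	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

    	raw, err := hex.DecodeString(proctitle)
    	if err != nil {
    		panic(err)
    	}
    	// NUL bytes separate the arguments; join them with spaces for display.
    	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
    	// Output: runc --root /run/containerd/runc/k8s.io
    }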
Dec 13 22:59:00.937518 kubelet[2752]: I1213 22:59:00.937483 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-ca-bundle\") pod \"613acf46-413a-48c0-b66e-dada1b02aafe\" (UID: \"613acf46-413a-48c0-b66e-dada1b02aafe\") " Dec 13 22:59:00.937950 kubelet[2752]: I1213 22:59:00.937833 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fm82\" (UniqueName: \"kubernetes.io/projected/613acf46-413a-48c0-b66e-dada1b02aafe-kube-api-access-5fm82\") pod \"613acf46-413a-48c0-b66e-dada1b02aafe\" (UID: \"613acf46-413a-48c0-b66e-dada1b02aafe\") " Dec 13 22:59:00.937950 kubelet[2752]: I1213 22:59:00.937862 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-backend-key-pair\") pod \"613acf46-413a-48c0-b66e-dada1b02aafe\" (UID: \"613acf46-413a-48c0-b66e-dada1b02aafe\") " Dec 13 22:59:00.939724 kubelet[2752]: I1213 22:59:00.939690 2752 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "613acf46-413a-48c0-b66e-dada1b02aafe" (UID: "613acf46-413a-48c0-b66e-dada1b02aafe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 13 22:59:00.946365 kubelet[2752]: I1213 22:59:00.946328 2752 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613acf46-413a-48c0-b66e-dada1b02aafe-kube-api-access-5fm82" (OuterVolumeSpecName: "kube-api-access-5fm82") pod "613acf46-413a-48c0-b66e-dada1b02aafe" (UID: "613acf46-413a-48c0-b66e-dada1b02aafe"). InnerVolumeSpecName "kube-api-access-5fm82". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 13 22:59:00.948683 kubelet[2752]: I1213 22:59:00.948628 2752 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "613acf46-413a-48c0-b66e-dada1b02aafe" (UID: "613acf46-413a-48c0-b66e-dada1b02aafe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 13 22:59:01.030656 systemd[1]: Removed slice kubepods-besteffort-pod613acf46_413a_48c0_b66e_dada1b02aafe.slice - libcontainer container kubepods-besteffort-pod613acf46_413a_48c0_b66e_dada1b02aafe.slice. 
Dec 13 22:59:01.038746 kubelet[2752]: I1213 22:59:01.038693 2752 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5fm82\" (UniqueName: \"kubernetes.io/projected/613acf46-413a-48c0-b66e-dada1b02aafe-kube-api-access-5fm82\") on node \"localhost\" DevicePath \"\"" Dec 13 22:59:01.038746 kubelet[2752]: I1213 22:59:01.038734 2752 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 13 22:59:01.038746 kubelet[2752]: I1213 22:59:01.038745 2752 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/613acf46-413a-48c0-b66e-dada1b02aafe-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 22:59:01.142712 systemd[1]: var-lib-kubelet-pods-613acf46\x2d413a\x2d48c0\x2db66e\x2ddada1b02aafe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5fm82.mount: Deactivated successfully. Dec 13 22:59:01.142813 systemd[1]: var-lib-kubelet-pods-613acf46\x2d413a\x2d48c0\x2db66e\x2ddada1b02aafe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 13 22:59:01.154664 kubelet[2752]: E1213 22:59:01.154606 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:01.172374 kubelet[2752]: I1213 22:59:01.172300 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r98vd" podStartSLOduration=1.281506215 podStartE2EDuration="13.172278574s" podCreationTimestamp="2025-12-13 22:58:48 +0000 UTC" firstStartedPulling="2025-12-13 22:58:48.550243879 +0000 UTC m=+25.664004323" lastFinishedPulling="2025-12-13 22:59:00.441016238 +0000 UTC m=+37.554776682" observedRunningTime="2025-12-13 22:59:01.171361336 +0000 UTC m=+38.285121780" watchObservedRunningTime="2025-12-13 22:59:01.172278574 +0000 UTC m=+38.286039018" Dec 13 22:59:01.264049 systemd[1]: Created slice kubepods-besteffort-podd1f05f6c_e0c0_404f_9df9_6993eb6f715c.slice - libcontainer container kubepods-besteffort-podd1f05f6c_e0c0_404f_9df9_6993eb6f715c.slice. 
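The pod_startup_latency_tracker entry above reports two durations for calico-node-r98vd, and the numbers are internally consistent: podStartE2EDuration (13.172278574s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.281506215s) equals that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 11.89s). This relationship is inferred from the reported values, not from kubelet source; a small Go check using the timestamps from the log:

    // Reproduce the two durations reported by pod_startup_latency_tracker
    // from the timestamps in the same log line (formula inferred from the
    // reported values).
    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Fractional seconds in the input are accepted even though the
    	// layout omits them.
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-12-13 22:58:48 +0000 UTC")                 // podCreationTimestamp
    	firstPull := mustParse("2025-12-13 22:58:48.550243879 +0000 UTC")     // firstStartedPulling
    	lastPull := mustParse("2025-12-13 22:59:00.441016238 +0000 UTC")      // lastFinishedPulling
    	running := mustParse("2025-12-13 22:59:01.172278574 +0000 UTC")       // watchObservedRunningTime

    	e2e := running.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull) // exclude time spent pulling the node image
    	fmt.Println(e2e) // 13.172278574s
    	fmt.Println(slo) // 1.281506215s
    }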
Dec 13 22:59:01.339983 kubelet[2752]: I1213 22:59:01.339930 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1f05f6c-e0c0-404f-9df9-6993eb6f715c-whisker-ca-bundle\") pod \"whisker-7bc476bc64-64mwd\" (UID: \"d1f05f6c-e0c0-404f-9df9-6993eb6f715c\") " pod="calico-system/whisker-7bc476bc64-64mwd" Dec 13 22:59:01.339983 kubelet[2752]: I1213 22:59:01.339985 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqvhr\" (UniqueName: \"kubernetes.io/projected/d1f05f6c-e0c0-404f-9df9-6993eb6f715c-kube-api-access-nqvhr\") pod \"whisker-7bc476bc64-64mwd\" (UID: \"d1f05f6c-e0c0-404f-9df9-6993eb6f715c\") " pod="calico-system/whisker-7bc476bc64-64mwd" Dec 13 22:59:01.340141 kubelet[2752]: I1213 22:59:01.340012 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1f05f6c-e0c0-404f-9df9-6993eb6f715c-whisker-backend-key-pair\") pod \"whisker-7bc476bc64-64mwd\" (UID: \"d1f05f6c-e0c0-404f-9df9-6993eb6f715c\") " pod="calico-system/whisker-7bc476bc64-64mwd" Dec 13 22:59:01.570971 containerd[1608]: time="2025-12-13T22:59:01.570890891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc476bc64-64mwd,Uid:d1f05f6c-e0c0-404f-9df9-6993eb6f715c,Namespace:calico-system,Attempt:0,}" Dec 13 22:59:01.573734 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:59386.service - OpenSSH per-connection server daemon (10.0.0.1:59386). Dec 13 22:59:01.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.10:22-10.0.0.1:59386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:01.656000 audit[3936]: USER_ACCT pid=3936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:01.658048 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 59386 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:01.657000 audit[3936]: CRED_ACQ pid=3936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:01.657000 audit[3936]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8f50400 a2=3 a3=0 items=0 ppid=1 pid=3936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.657000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:01.659901 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:01.664307 systemd-logind[1585]: New session 9 of user core. Dec 13 22:59:01.673770 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 22:59:01.674000 audit[3936]: USER_START pid=3936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:01.676000 audit[3959]: CRED_ACQ pid=3959 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:01.778976 systemd-networkd[1290]: cali707bf39fbe7: Link UP Dec 13 22:59:01.781774 systemd-networkd[1290]: cali707bf39fbe7: Gained carrier Dec 13 22:59:01.800761 containerd[1608]: 2025-12-13 22:59:01.614 [INFO][3938] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 22:59:01.800761 containerd[1608]: 2025-12-13 22:59:01.648 [INFO][3938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7bc476bc64--64mwd-eth0 whisker-7bc476bc64- calico-system d1f05f6c-e0c0-404f-9df9-6993eb6f715c 915 0 2025-12-13 22:59:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bc476bc64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7bc476bc64-64mwd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali707bf39fbe7 [] [] }} ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-" Dec 13 22:59:01.800761 containerd[1608]: 2025-12-13 22:59:01.649 [INFO][3938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" Dec 13 22:59:01.800761 containerd[1608]: 2025-12-13 22:59:01.729 [INFO][3953] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" HandleID="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Workload="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.730 [INFO][3953] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" HandleID="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Workload="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7bc476bc64-64mwd", "timestamp":"2025-12-13 22:59:01.729841595 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.730 [INFO][3953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.730 [INFO][3953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.730 [INFO][3953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.741 [INFO][3953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" host="localhost" Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.747 [INFO][3953] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.751 [INFO][3953] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.753 [INFO][3953] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.755 [INFO][3953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:01.803105 containerd[1608]: 2025-12-13 22:59:01.755 [INFO][3953] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" host="localhost" Dec 13 22:59:01.803387 containerd[1608]: 2025-12-13 22:59:01.756 [INFO][3953] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374 Dec 13 22:59:01.803387 containerd[1608]: 2025-12-13 22:59:01.760 [INFO][3953] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" host="localhost" Dec 13 22:59:01.803387 containerd[1608]: 2025-12-13 22:59:01.765 [INFO][3953] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" host="localhost" Dec 13 22:59:01.803387 containerd[1608]: 2025-12-13 22:59:01.765 [INFO][3953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" host="localhost" Dec 13 22:59:01.803387 containerd[1608]: 2025-12-13 22:59:01.765 [INFO][3953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
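The IPAM trace above confirms an affinity for the block 192.168.88.128/26 on host "localhost" and claims the first address in it, 192.168.88.129, for the whisker pod (recorded on the endpoint as a /32). A /26 holds 64 addresses, .128 through .191. A quick Go check of that block geometry; the helper is illustrative, not Calico's IPAM code.

    // Check the block geometry implied by the IPAM log lines above.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	block := netip.MustParsePrefix("192.168.88.128/26") // affine block from the log
    	assigned := netip.MustParseAddr("192.168.88.129")   // address claimed for the pod

    	fmt.Println("block contains assigned IP:", block.Contains(assigned)) // true

    	// An IPv4 /26 spans 2^(32-26) = 64 addresses: .128 through .191.
    	fmt.Println("block size:", 1<<(32-block.Bits()))
    }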
Dec 13 22:59:01.803387 containerd[1608]: 2025-12-13 22:59:01.765 [INFO][3953] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" HandleID="k8s-pod-network.3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Workload="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" Dec 13 22:59:01.803503 containerd[1608]: 2025-12-13 22:59:01.768 [INFO][3938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bc476bc64--64mwd-eth0", GenerateName:"whisker-7bc476bc64-", Namespace:"calico-system", SelfLink:"", UID:"d1f05f6c-e0c0-404f-9df9-6993eb6f715c", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 59, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bc476bc64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7bc476bc64-64mwd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali707bf39fbe7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:01.803503 containerd[1608]: 2025-12-13 22:59:01.768 [INFO][3938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" Dec 13 22:59:01.804212 containerd[1608]: 2025-12-13 22:59:01.768 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali707bf39fbe7 ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" Dec 13 22:59:01.804212 containerd[1608]: 2025-12-13 22:59:01.781 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" Dec 13 22:59:01.804353 containerd[1608]: 2025-12-13 22:59:01.782 [INFO][3938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bc476bc64--64mwd-eth0", GenerateName:"whisker-7bc476bc64-", Namespace:"calico-system", SelfLink:"", UID:"d1f05f6c-e0c0-404f-9df9-6993eb6f715c", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 59, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bc476bc64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374", Pod:"whisker-7bc476bc64-64mwd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali707bf39fbe7", MAC:"9e:13:72:7b:8c:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:01.804430 containerd[1608]: 2025-12-13 22:59:01.796 [INFO][3938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" Namespace="calico-system" Pod="whisker-7bc476bc64-64mwd" WorkloadEndpoint="localhost-k8s-whisker--7bc476bc64--64mwd-eth0" Dec 13 22:59:01.866844 containerd[1608]: time="2025-12-13T22:59:01.865820628Z" level=info msg="connecting to shim 3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374" address="unix:///run/containerd/s/679fc4828bbeb6b9f050e29e2c4c5eaa83013722b0b777efb45f6cf0bc046b1d" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:01.867437 sshd[3959]: Connection closed by 10.0.0.1 port 59386 Dec 13 22:59:01.868012 sshd-session[3936]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:01.867000 audit[3936]: USER_END pid=3936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:01.868000 audit[3936]: CRED_DISP pid=3936 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:01.872824 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:59386.service: Deactivated successfully. Dec 13 22:59:01.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.10:22-10.0.0.1:59386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:01.875149 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 22:59:01.876799 systemd-logind[1585]: Session 9 logged out. Waiting for processes to exit. Dec 13 22:59:01.877706 systemd-logind[1585]: Removed session 9. 
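Throughout the CNI add above, the pod is referred to by the WorkloadEndpoint name localhost-k8s-whisker--7bc476bc64--64mwd-eth0. Comparing that name with the pod name in the same lines suggests the scheme <node>-k8s-<pod name with each "-" doubled>-<interface>. The sketch below reconstructs the name from the strings in the log and is not taken from Calico source.

    // Rebuild the WorkloadEndpoint name seen in the log from its parts
    // (naming scheme inferred by comparing the log strings).
    package main

    import (
    	"fmt"
    	"strings"
    )

    func workloadEndpointName(node, pod, iface string) string {
    	return node + "-k8s-" + strings.ReplaceAll(pod, "-", "--") + "-" + iface
    }

    func main() {
    	fmt.Println(workloadEndpointName("localhost", "whisker-7bc476bc64-64mwd", "eth0"))
    	// localhost-k8s-whisker--7bc476bc64--64mwd-eth0
    }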
Dec 13 22:59:01.891752 systemd[1]: Started cri-containerd-3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374.scope - libcontainer container 3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374. Dec 13 22:59:01.900000 audit: BPF prog-id=179 op=LOAD Dec 13 22:59:01.901000 audit: BPF prog-id=180 op=LOAD Dec 13 22:59:01.901000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353665653937386638363866306162366239666661393766323630 Dec 13 22:59:01.901000 audit: BPF prog-id=180 op=UNLOAD Dec 13 22:59:01.901000 audit[4013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353665653937386638363866306162366239666661393766323630 Dec 13 22:59:01.901000 audit: BPF prog-id=181 op=LOAD Dec 13 22:59:01.901000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353665653937386638363866306162366239666661393766323630 Dec 13 22:59:01.901000 audit: BPF prog-id=182 op=LOAD Dec 13 22:59:01.901000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353665653937386638363866306162366239666661393766323630 Dec 13 22:59:01.901000 audit: BPF prog-id=182 op=UNLOAD Dec 13 22:59:01.901000 audit[4013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.901000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353665653937386638363866306162366239666661393766323630 Dec 13 22:59:01.901000 audit: BPF prog-id=181 op=UNLOAD Dec 13 22:59:01.901000 audit[4013]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353665653937386638363866306162366239666661393766323630 Dec 13 22:59:01.901000 audit: BPF prog-id=183 op=LOAD Dec 13 22:59:01.901000 audit[4013]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4001 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:01.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331353665653937386638363866306162366239666661393766323630 Dec 13 22:59:01.903646 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:01.928342 containerd[1608]: time="2025-12-13T22:59:01.928295776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bc476bc64-64mwd,Uid:d1f05f6c-e0c0-404f-9df9-6993eb6f715c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3156ee978f868f0ab6b9ffa97f2605358476be91f91953b12efee985bfccb374\"" Dec 13 22:59:01.931824 containerd[1608]: time="2025-12-13T22:59:01.931771049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 22:59:02.139195 containerd[1608]: time="2025-12-13T22:59:02.139026269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:02.141711 containerd[1608]: time="2025-12-13T22:59:02.141589024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 22:59:02.141940 containerd[1608]: time="2025-12-13T22:59:02.141843783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:02.142371 kubelet[2752]: E1213 22:59:02.142243 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 22:59:02.143259 kubelet[2752]: E1213 22:59:02.142962 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 22:59:02.147202 kubelet[2752]: E1213 22:59:02.146981 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:029548d4e33045ac9fd2ec7d416edc31,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqvhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bc476bc64-64mwd_calico-system(d1f05f6c-e0c0-404f-9df9-6993eb6f715c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:02.149240 containerd[1608]: time="2025-12-13T22:59:02.149177769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 22:59:02.160298 kubelet[2752]: E1213 22:59:02.160238 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:02.204000 audit: BPF prog-id=184 op=LOAD Dec 13 22:59:02.204000 audit[4179]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffffeeb6e8 a2=98 a3=ffffffeeb6d8 items=0 ppid=4052 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 22:59:02.204000 audit: BPF prog-id=184 op=UNLOAD Dec 13 22:59:02.204000 audit[4179]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffffeeb6b8 a3=0 items=0 ppid=4052 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.204000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 22:59:02.204000 audit: BPF prog-id=185 op=LOAD Dec 13 22:59:02.204000 audit[4179]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffffeeb598 a2=74 a3=95 items=0 ppid=4052 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 22:59:02.204000 audit: BPF prog-id=185 op=UNLOAD Dec 13 22:59:02.204000 audit[4179]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4052 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 22:59:02.204000 audit: BPF prog-id=186 op=LOAD Dec 13 22:59:02.204000 audit[4179]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffffeeb5c8 a2=40 a3=ffffffeeb5f8 items=0 ppid=4052 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 22:59:02.204000 audit: BPF prog-id=186 op=UNLOAD Dec 13 22:59:02.204000 audit[4179]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffffeeb5f8 items=0 ppid=4052 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.204000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 22:59:02.206000 audit: BPF prog-id=187 op=LOAD Dec 13 22:59:02.206000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffefd66f88 a2=98 a3=ffffefd66f78 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.206000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.207000 audit: BPF 
prog-id=187 op=UNLOAD Dec 13 22:59:02.207000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffefd66f58 a3=0 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.207000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.207000 audit: BPF prog-id=188 op=LOAD Dec 13 22:59:02.207000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffefd66c18 a2=74 a3=95 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.207000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.207000 audit: BPF prog-id=188 op=UNLOAD Dec 13 22:59:02.207000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.207000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.207000 audit: BPF prog-id=189 op=LOAD Dec 13 22:59:02.207000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffefd66c78 a2=94 a3=2 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.207000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.208000 audit: BPF prog-id=189 op=UNLOAD Dec 13 22:59:02.208000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.208000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.327000 audit: BPF prog-id=190 op=LOAD Dec 13 22:59:02.327000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffefd66c38 a2=40 a3=ffffefd66c68 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.327000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.327000 audit: BPF prog-id=190 op=UNLOAD Dec 13 22:59:02.327000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffefd66c68 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.327000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.336000 audit: BPF prog-id=191 op=LOAD Dec 13 22:59:02.336000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffefd66c48 a2=94 a3=4 items=0 ppid=4052 pid=4184 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.336000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.336000 audit: BPF prog-id=191 op=UNLOAD Dec 13 22:59:02.336000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.336000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.337000 audit: BPF prog-id=192 op=LOAD Dec 13 22:59:02.337000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffefd66a88 a2=94 a3=5 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.337000 audit: BPF prog-id=192 op=UNLOAD Dec 13 22:59:02.337000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.337000 audit: BPF prog-id=193 op=LOAD Dec 13 22:59:02.337000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffefd66cb8 a2=94 a3=6 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.337000 audit: BPF prog-id=193 op=UNLOAD Dec 13 22:59:02.337000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.337000 audit: BPF prog-id=194 op=LOAD Dec 13 22:59:02.337000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffefd66488 a2=94 a3=83 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.337000 audit: BPF prog-id=195 op=LOAD Dec 13 22:59:02.337000 audit[4184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffefd66248 a2=94 a3=2 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
22:59:02.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.337000 audit: BPF prog-id=195 op=UNLOAD Dec 13 22:59:02.337000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.337000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.338000 audit: BPF prog-id=194 op=UNLOAD Dec 13 22:59:02.338000 audit[4184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=10d58620 a3=10d4bb00 items=0 ppid=4052 pid=4184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.338000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 22:59:02.348000 audit: BPF prog-id=196 op=LOAD Dec 13 22:59:02.348000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0c0e3e8 a2=98 a3=fffff0c0e3d8 items=0 ppid=4052 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 22:59:02.348000 audit: BPF prog-id=196 op=UNLOAD Dec 13 22:59:02.348000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff0c0e3b8 a3=0 items=0 ppid=4052 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 22:59:02.348000 audit: BPF prog-id=197 op=LOAD Dec 13 22:59:02.348000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0c0e298 a2=74 a3=95 items=0 ppid=4052 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 22:59:02.348000 audit: BPF prog-id=197 op=UNLOAD Dec 13 22:59:02.348000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4052 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.348000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 22:59:02.348000 audit: BPF prog-id=198 op=LOAD Dec 13 22:59:02.348000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0c0e2c8 a2=40 a3=fffff0c0e2f8 items=0 ppid=4052 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 22:59:02.348000 audit: BPF prog-id=198 op=UNLOAD Dec 13 22:59:02.348000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff0c0e2f8 items=0 ppid=4052 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.348000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 22:59:02.358977 containerd[1608]: time="2025-12-13T22:59:02.358902353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:02.371567 containerd[1608]: time="2025-12-13T22:59:02.371484728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:02.371664 containerd[1608]: time="2025-12-13T22:59:02.371486968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 22:59:02.371864 kubelet[2752]: E1213 22:59:02.371811 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 22:59:02.371941 kubelet[2752]: E1213 22:59:02.371878 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 22:59:02.372073 kubelet[2752]: E1213 22:59:02.372021 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqvhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bc476bc64-64mwd_calico-system(d1f05f6c-e0c0-404f-9df9-6993eb6f715c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:02.373354 kubelet[2752]: E1213 22:59:02.373292 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bc476bc64-64mwd" podUID="d1f05f6c-e0c0-404f-9df9-6993eb6f715c" Dec 13 22:59:02.418853 systemd-networkd[1290]: vxlan.calico: Link UP Dec 13 22:59:02.418859 systemd-networkd[1290]: vxlan.calico: Gained carrier Dec 13 22:59:02.432000 audit: BPF prog-id=199 op=LOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd75156c8 a2=98 a3=ffffd75156b8 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=199 op=UNLOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd7515698 a3=0 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=200 op=LOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd75153a8 a2=74 a3=95 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=200 op=UNLOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=201 op=LOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7515408 a2=94 a3=2 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=201 op=UNLOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=202 op=LOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=6 a0=5 a1=ffffd7515288 a2=40 a3=ffffd75152b8 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=202 op=UNLOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd75152b8 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=203 op=LOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd75153d8 a2=94 a3=b7 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.432000 audit: BPF prog-id=203 op=UNLOAD Dec 13 22:59:02.432000 audit[4223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.432000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.434000 audit: BPF prog-id=204 op=LOAD Dec 13 22:59:02.434000 audit[4223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7514a88 a2=94 a3=2 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.434000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.434000 audit: BPF prog-id=204 op=UNLOAD Dec 13 22:59:02.434000 audit[4223]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.434000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.434000 audit: BPF prog-id=205 op=LOAD Dec 13 22:59:02.434000 audit[4223]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd7514c18 a2=94 a3=30 items=0 ppid=4052 pid=4223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.434000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 22:59:02.440000 audit: BPF prog-id=206 op=LOAD Dec 13 22:59:02.440000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcd353178 a2=98 a3=ffffcd353168 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.440000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.440000 audit: BPF prog-id=206 op=UNLOAD Dec 13 22:59:02.440000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcd353148 a3=0 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.440000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.440000 audit: BPF prog-id=207 op=LOAD Dec 13 22:59:02.440000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd352e08 a2=74 a3=95 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.440000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.440000 audit: BPF prog-id=207 op=UNLOAD Dec 13 22:59:02.440000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.440000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.440000 audit: BPF prog-id=208 op=LOAD Dec 13 22:59:02.440000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd352e68 a2=94 a3=2 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.440000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.440000 audit: BPF prog-id=208 op=UNLOAD Dec 13 22:59:02.440000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.440000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.541000 audit: BPF prog-id=209 op=LOAD Dec 13 22:59:02.541000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcd352e28 a2=40 a3=ffffcd352e58 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.541000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.541000 audit: BPF prog-id=209 op=UNLOAD Dec 13 22:59:02.541000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffcd352e58 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.541000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.551000 audit: BPF prog-id=210 op=LOAD Dec 13 22:59:02.551000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcd352e38 a2=94 a3=4 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.551000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.552000 audit: BPF prog-id=210 op=UNLOAD Dec 13 22:59:02.552000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.552000 audit: BPF prog-id=211 op=LOAD Dec 13 22:59:02.552000 audit[4227]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcd352c78 a2=94 a3=5 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.552000 audit: BPF prog-id=211 op=UNLOAD Dec 13 22:59:02.552000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.552000 audit: BPF prog-id=212 op=LOAD Dec 13 22:59:02.552000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcd352ea8 a2=94 a3=6 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.552000 audit: BPF prog-id=212 op=UNLOAD Dec 13 22:59:02.552000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.552000 audit: BPF prog-id=213 op=LOAD Dec 13 22:59:02.552000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffcd352678 a2=94 a3=83 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.552000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.553000 audit: BPF prog-id=214 op=LOAD Dec 13 22:59:02.553000 audit[4227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffcd352438 a2=94 a3=2 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.553000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.553000 audit: BPF prog-id=214 op=UNLOAD Dec 13 22:59:02.553000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.553000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.553000 audit: BPF prog-id=213 op=UNLOAD Dec 13 22:59:02.553000 audit[4227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=6d2620 a3=6c5b00 items=0 ppid=4052 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.553000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 22:59:02.562000 audit: BPF prog-id=205 op=UNLOAD Dec 13 22:59:02.562000 audit[4052]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000def240 a2=0 a3=0 items=0 ppid=4046 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.562000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 13 22:59:02.604000 audit[4255]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:02.604000 audit[4255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc58354d0 a2=0 a3=ffffb4559fa8 items=0 ppid=4052 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.604000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:02.604000 audit[4254]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4254 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:02.604000 audit[4254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc3545c40 a2=0 a3=ffffa26befa8 items=0 ppid=4052 pid=4254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.604000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:02.610000 audit[4253]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4253 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 
22:59:02.610000 audit[4253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffdbae6610 a2=0 a3=ffff8a86ffa8 items=0 ppid=4052 pid=4253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.610000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:02.615000 audit[4257]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4257 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:02.615000 audit[4257]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffdc392b70 a2=0 a3=ffffbb64bfa8 items=0 ppid=4052 pid=4257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:02.615000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:03.025764 kubelet[2752]: I1213 22:59:03.025727 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="613acf46-413a-48c0-b66e-dada1b02aafe" path="/var/lib/kubelet/pods/613acf46-413a-48c0-b66e-dada1b02aafe/volumes" Dec 13 22:59:03.160405 kubelet[2752]: E1213 22:59:03.160374 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:03.161340 kubelet[2752]: E1213 22:59:03.161054 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bc476bc64-64mwd" podUID="d1f05f6c-e0c0-404f-9df9-6993eb6f715c" Dec 13 22:59:03.207000 audit[4291]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:03.207000 audit[4291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe0dd10f0 a2=0 a3=1 items=0 ppid=2863 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:03.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:03.215000 audit[4291]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4291 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 13 22:59:03.215000 audit[4291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe0dd10f0 a2=0 a3=1 items=0 ppid=2863 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:03.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:03.433732 systemd-networkd[1290]: cali707bf39fbe7: Gained IPv6LL Dec 13 22:59:03.625721 systemd-networkd[1290]: vxlan.calico: Gained IPv6LL Dec 13 22:59:06.022548 kubelet[2752]: E1213 22:59:06.022511 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:06.023171 containerd[1608]: time="2025-12-13T22:59:06.023129097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7psl5,Uid:8456e800-6c4f-44dd-baf5-c2d42ab5dd0e,Namespace:kube-system,Attempt:0,}" Dec 13 22:59:06.130848 systemd-networkd[1290]: calidfbd903f0db: Link UP Dec 13 22:59:06.131538 systemd-networkd[1290]: calidfbd903f0db: Gained carrier Dec 13 22:59:06.143155 containerd[1608]: 2025-12-13 22:59:06.064 [INFO][4297] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7psl5-eth0 coredns-668d6bf9bc- kube-system 8456e800-6c4f-44dd-baf5-c2d42ab5dd0e 834 0 2025-12-13 22:58:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7psl5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidfbd903f0db [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-" Dec 13 22:59:06.143155 containerd[1608]: 2025-12-13 22:59:06.064 [INFO][4297] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" Dec 13 22:59:06.143155 containerd[1608]: 2025-12-13 22:59:06.088 [INFO][4312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" HandleID="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Workload="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.088 [INFO][4312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" HandleID="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Workload="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000494990), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7psl5", "timestamp":"2025-12-13 22:59:06.088781476 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.088 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.089 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.089 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.099 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" host="localhost" Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.103 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.108 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.111 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.113 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:06.143348 containerd[1608]: 2025-12-13 22:59:06.113 [INFO][4312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" host="localhost" Dec 13 22:59:06.143756 containerd[1608]: 2025-12-13 22:59:06.115 [INFO][4312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd Dec 13 22:59:06.143756 containerd[1608]: 2025-12-13 22:59:06.118 [INFO][4312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" host="localhost" Dec 13 22:59:06.143756 containerd[1608]: 2025-12-13 22:59:06.124 [INFO][4312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" host="localhost" Dec 13 22:59:06.143756 containerd[1608]: 2025-12-13 22:59:06.124 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" host="localhost" Dec 13 22:59:06.143756 containerd[1608]: 2025-12-13 22:59:06.124 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
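The Calico IPAM messages above show the CNI plugin taking the host-wide IPAM lock, confirming the affinity for block 192.168.88.128/26 on host "localhost", and claiming 192.168.88.130/26 for coredns-668d6bf9bc-7psl5. As a minimal sketch (plain Python, not Calico's own code), the containment relationship those records describe can be checked like this:

    # Minimal sketch: confirm the claimed address sits inside the affinity
    # block reported by ipam.go in the log above (illustration only).
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")      # block loaded for host "localhost"
    claimed = ipaddress.ip_interface("192.168.88.130/26")  # address claimed for the coredns pod

    assert claimed.ip in block
    print(f"{claimed.ip} is one of {block.num_addresses} addresses in {block}")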
Dec 13 22:59:06.143756 containerd[1608]: 2025-12-13 22:59:06.124 [INFO][4312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" HandleID="k8s-pod-network.4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Workload="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" Dec 13 22:59:06.143862 containerd[1608]: 2025-12-13 22:59:06.127 [INFO][4297] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7psl5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8456e800-6c4f-44dd-baf5-c2d42ab5dd0e", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7psl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfbd903f0db", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:06.144053 containerd[1608]: 2025-12-13 22:59:06.127 [INFO][4297] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" Dec 13 22:59:06.144053 containerd[1608]: 2025-12-13 22:59:06.127 [INFO][4297] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfbd903f0db ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" Dec 13 22:59:06.144053 containerd[1608]: 2025-12-13 22:59:06.130 [INFO][4297] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" Dec 13 22:59:06.144288 
containerd[1608]: 2025-12-13 22:59:06.131 [INFO][4297] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7psl5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8456e800-6c4f-44dd-baf5-c2d42ab5dd0e", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd", Pod:"coredns-668d6bf9bc-7psl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfbd903f0db", MAC:"de:c4:dd:46:6c:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:06.144288 containerd[1608]: 2025-12-13 22:59:06.140 [INFO][4297] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" Namespace="kube-system" Pod="coredns-668d6bf9bc-7psl5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7psl5-eth0" Dec 13 22:59:06.159000 audit[4331]: NETFILTER_CFG table=filter:127 family=2 entries=42 op=nft_register_chain pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:06.163986 kernel: kauditd_printk_skb: 242 callbacks suppressed Dec 13 22:59:06.164073 kernel: audit: type=1325 audit(1765666746.159:662): table=filter:127 family=2 entries=42 op=nft_register_chain pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:06.164097 kernel: audit: type=1300 audit(1765666746.159:662): arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc4215990 a2=0 a3=ffff86c0dfa8 items=0 ppid=4052 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.159000 audit[4331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22552 a0=3 a1=ffffc4215990 a2=0 a3=ffff86c0dfa8 items=0 ppid=4052 pid=4331 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.167359 containerd[1608]: time="2025-12-13T22:59:06.167018477Z" level=info msg="connecting to shim 4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd" address="unix:///run/containerd/s/f6f15061b44704ff591ddaa50fc383403327b6625e837c5575edeebbe1415131" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:06.159000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:06.169352 kernel: audit: type=1327 audit(1765666746.159:662): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:06.199384 systemd[1]: Started cri-containerd-4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd.scope - libcontainer container 4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd. Dec 13 22:59:06.208000 audit: BPF prog-id=215 op=LOAD Dec 13 22:59:06.211575 kernel: audit: type=1334 audit(1765666746.208:663): prog-id=215 op=LOAD Dec 13 22:59:06.211619 kernel: audit: type=1334 audit(1765666746.209:664): prog-id=216 op=LOAD Dec 13 22:59:06.209000 audit: BPF prog-id=216 op=LOAD Dec 13 22:59:06.209000 audit[4351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.215598 kernel: audit: type=1300 audit(1765666746.209:664): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.209000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.219240 kernel: audit: type=1327 audit(1765666746.209:664): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.210000 audit: BPF prog-id=216 op=UNLOAD Dec 13 22:59:06.219574 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:06.220080 kernel: audit: type=1334 audit(1765666746.210:665): prog-id=216 op=UNLOAD Dec 13 22:59:06.220108 kernel: audit: type=1300 audit(1765666746.210:665): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.210000 audit[4351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4341 pid=4351 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.225722 kernel: audit: type=1327 audit(1765666746.210:665): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.214000 audit: BPF prog-id=217 op=LOAD Dec 13 22:59:06.214000 audit[4351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.214000 audit: BPF prog-id=218 op=LOAD Dec 13 22:59:06.214000 audit[4351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.214000 audit: BPF prog-id=218 op=UNLOAD Dec 13 22:59:06.214000 audit[4351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.214000 audit: BPF prog-id=217 op=UNLOAD Dec 13 22:59:06.214000 audit[4351]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 
22:59:06.214000 audit: BPF prog-id=219 op=LOAD Dec 13 22:59:06.214000 audit[4351]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4341 pid=4351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463613231646337663738613639313565363934333030323730383366 Dec 13 22:59:06.243752 containerd[1608]: time="2025-12-13T22:59:06.243718799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7psl5,Uid:8456e800-6c4f-44dd-baf5-c2d42ab5dd0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd\"" Dec 13 22:59:06.244813 kubelet[2752]: E1213 22:59:06.244787 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:06.249591 containerd[1608]: time="2025-12-13T22:59:06.249527510Z" level=info msg="CreateContainer within sandbox \"4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 22:59:06.258025 containerd[1608]: time="2025-12-13T22:59:06.257990697Z" level=info msg="Container e270df74b11fe520256a1b954cac59743fbf6edee382dc325522659df9390c4f: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:59:06.265448 containerd[1608]: time="2025-12-13T22:59:06.265393126Z" level=info msg="CreateContainer within sandbox \"4ca21dc7f78a6915e69430027083fc1859a5af3b64cbd2bbb2264ca0d9183dcd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e270df74b11fe520256a1b954cac59743fbf6edee382dc325522659df9390c4f\"" Dec 13 22:59:06.266088 containerd[1608]: time="2025-12-13T22:59:06.266048765Z" level=info msg="StartContainer for \"e270df74b11fe520256a1b954cac59743fbf6edee382dc325522659df9390c4f\"" Dec 13 22:59:06.267253 containerd[1608]: time="2025-12-13T22:59:06.267209563Z" level=info msg="connecting to shim e270df74b11fe520256a1b954cac59743fbf6edee382dc325522659df9390c4f" address="unix:///run/containerd/s/f6f15061b44704ff591ddaa50fc383403327b6625e837c5575edeebbe1415131" protocol=ttrpc version=3 Dec 13 22:59:06.296759 systemd[1]: Started cri-containerd-e270df74b11fe520256a1b954cac59743fbf6edee382dc325522659df9390c4f.scope - libcontainer container e270df74b11fe520256a1b954cac59743fbf6edee382dc325522659df9390c4f. 
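The audit PROCTITLE fields throughout this section encode each command line as hex with NUL separators between arguments; the runc records around the cri-containerd scope above and the bpftool records earlier all follow this pattern (for example, 627066746F6F6C006D6170006C697374002D2D6A736F6E is "bpftool map list --json"). A minimal decoding sketch in Python, offered as an illustration rather than as part of any audit tooling:

    # Minimal sketch: decode an audit PROCTITLE hex value into argv.
    # Arguments are NUL-separated in the raw proctitle bytes.
    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Value copied from one of the bpftool audit records above.
    print(decode_proctitle("627066746F6F6C006D6170006C697374002D2D6A736F6E"))
    # -> ['bpftool', 'map', 'list', '--json']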
Dec 13 22:59:06.306000 audit: BPF prog-id=220 op=LOAD Dec 13 22:59:06.307000 audit: BPF prog-id=221 op=LOAD Dec 13 22:59:06.307000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4341 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532373064663734623131666535323032353661316239353463616335 Dec 13 22:59:06.307000 audit: BPF prog-id=221 op=UNLOAD Dec 13 22:59:06.307000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4341 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532373064663734623131666535323032353661316239353463616335 Dec 13 22:59:06.307000 audit: BPF prog-id=222 op=LOAD Dec 13 22:59:06.307000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4341 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532373064663734623131666535323032353661316239353463616335 Dec 13 22:59:06.307000 audit: BPF prog-id=223 op=LOAD Dec 13 22:59:06.307000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4341 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532373064663734623131666535323032353661316239353463616335 Dec 13 22:59:06.307000 audit: BPF prog-id=223 op=UNLOAD Dec 13 22:59:06.307000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4341 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532373064663734623131666535323032353661316239353463616335 Dec 13 22:59:06.307000 audit: BPF prog-id=222 op=UNLOAD Dec 13 22:59:06.307000 audit[4377]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4341 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532373064663734623131666535323032353661316239353463616335 Dec 13 22:59:06.307000 audit: BPF prog-id=224 op=LOAD Dec 13 22:59:06.307000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4341 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.307000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532373064663734623131666535323032353661316239353463616335 Dec 13 22:59:06.323813 containerd[1608]: time="2025-12-13T22:59:06.323753157Z" level=info msg="StartContainer for \"e270df74b11fe520256a1b954cac59743fbf6edee382dc325522659df9390c4f\" returns successfully" Dec 13 22:59:06.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.10:22-10.0.0.1:59414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:06.882820 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:59414.service - OpenSSH per-connection server daemon (10.0.0.1:59414). Dec 13 22:59:06.957000 audit[4414]: USER_ACCT pid=4414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:06.959536 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 59414 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:06.959000 audit[4414]: CRED_ACQ pid=4414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:06.959000 audit[4414]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1858670 a2=3 a3=0 items=0 ppid=1 pid=4414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:06.959000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:06.961614 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:06.967718 systemd-logind[1585]: New session 10 of user core. Dec 13 22:59:06.975760 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 13 22:59:06.977000 audit[4414]: USER_START pid=4414 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:06.981000 audit[4418]: CRED_ACQ pid=4418 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:07.023168 containerd[1608]: time="2025-12-13T22:59:07.023127408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddf58565-58qbn,Uid:bc906328-edba-4f47-8190-6385bd5de6a4,Namespace:calico-apiserver,Attempt:0,}" Dec 13 22:59:07.156338 systemd-networkd[1290]: calid534370cbb2: Link UP Dec 13 22:59:07.157036 systemd-networkd[1290]: calid534370cbb2: Gained carrier Dec 13 22:59:07.165361 sshd[4418]: Connection closed by 10.0.0.1 port 59414 Dec 13 22:59:07.165827 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:07.167000 audit[4414]: USER_END pid=4414 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:07.167000 audit[4414]: CRED_DISP pid=4414 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:07.172572 kubelet[2752]: E1213 22:59:07.172444 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:07.174504 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:59414.service: Deactivated successfully. Dec 13 22:59:07.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.10:22-10.0.0.1:59414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:07.178147 systemd[1]: session-10.scope: Deactivated successfully. 
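The kubelet dns.go:153 error repeated in this span warns that the host resolv.conf lists more nameservers than kubelet will pass into pod resolv.conf, so only the three shown ("1.1.1.1 1.0.0.1 8.8.8.8") are applied. A hedged sketch of that truncation, with a hypothetical fourth host entry added purely for illustration:

    # Sketch of the truncation kubelet's dns.go:153 message describes: only the
    # first few nameservers from the host resolv.conf are applied to pods.
    MAX_NAMESERVERS = 3  # kubelet's per-pod nameserver limit
    host_nameservers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]  # 9.9.9.9 is hypothetical
    applied = host_nameservers[:MAX_NAMESERVERS]
    print(" ".join(applied))  # 1.1.1.1 1.0.0.1 8.8.8.8  -- matches the applied line in the log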
Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.069 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0 calico-apiserver-6ddf58565- calico-apiserver bc906328-edba-4f47-8190-6385bd5de6a4 845 0 2025-12-13 22:58:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ddf58565 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6ddf58565-58qbn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid534370cbb2 [] [] }} ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.069 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.097 [INFO][4442] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" HandleID="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Workload="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.097 [INFO][4442] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" HandleID="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Workload="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001364a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6ddf58565-58qbn", "timestamp":"2025-12-13 22:59:07.097125542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.097 [INFO][4442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.097 [INFO][4442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.097 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.107 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.112 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.117 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.119 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.122 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.122 [INFO][4442] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.124 [INFO][4442] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.137 [INFO][4442] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.149 [INFO][4442] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.149 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" host="localhost" Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.149 [INFO][4442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 22:59:07.181789 containerd[1608]: 2025-12-13 22:59:07.149 [INFO][4442] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" HandleID="k8s-pod-network.a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Workload="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" Dec 13 22:59:07.182347 containerd[1608]: 2025-12-13 22:59:07.152 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0", GenerateName:"calico-apiserver-6ddf58565-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc906328-edba-4f47-8190-6385bd5de6a4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ddf58565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6ddf58565-58qbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid534370cbb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:07.182347 containerd[1608]: 2025-12-13 22:59:07.152 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" Dec 13 22:59:07.182347 containerd[1608]: 2025-12-13 22:59:07.152 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid534370cbb2 ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" Dec 13 22:59:07.182347 containerd[1608]: 2025-12-13 22:59:07.157 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" Dec 13 22:59:07.182347 containerd[1608]: 2025-12-13 22:59:07.157 [INFO][4429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0", GenerateName:"calico-apiserver-6ddf58565-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc906328-edba-4f47-8190-6385bd5de6a4", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ddf58565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f", Pod:"calico-apiserver-6ddf58565-58qbn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid534370cbb2", MAC:"a2:20:c6:18:91:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:07.182347 containerd[1608]: 2025-12-13 22:59:07.173 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" Namespace="calico-apiserver" Pod="calico-apiserver-6ddf58565-58qbn" WorkloadEndpoint="localhost-k8s-calico--apiserver--6ddf58565--58qbn-eth0" Dec 13 22:59:07.182827 systemd-logind[1585]: Session 10 logged out. Waiting for processes to exit. Dec 13 22:59:07.185068 systemd-logind[1585]: Removed session 10. 
Dec 13 22:59:07.190935 kubelet[2752]: I1213 22:59:07.190882 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7psl5" podStartSLOduration=38.190863968 podStartE2EDuration="38.190863968s" podCreationTimestamp="2025-12-13 22:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:59:07.18953257 +0000 UTC m=+44.303293014" watchObservedRunningTime="2025-12-13 22:59:07.190863968 +0000 UTC m=+44.304624372" Dec 13 22:59:07.201000 audit[4465]: NETFILTER_CFG table=filter:128 family=2 entries=60 op=nft_register_chain pid=4465 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:07.201000 audit[4465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32248 a0=3 a1=ffffce1c72f0 a2=0 a3=ffffbe4b7fa8 items=0 ppid=4052 pid=4465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.201000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:07.213000 audit[4471]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:07.213000 audit[4471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd5e97670 a2=0 a3=1 items=0 ppid=2863 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:07.221267 containerd[1608]: time="2025-12-13T22:59:07.221216964Z" level=info msg="connecting to shim a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f" address="unix:///run/containerd/s/b7ecbf9cf3266731384f9beceef54f220d63d6627b95135acaeb84aa5f5ea83a" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:07.221000 audit[4471]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:07.221000 audit[4471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd5e97670 a2=0 a3=1 items=0 ppid=2863 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:07.247000 audit[4501]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:07.247000 audit[4501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff7b85180 a2=0 a3=1 items=0 ppid=2863 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.247000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:07.252384 systemd[1]: Started cri-containerd-a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f.scope - libcontainer container a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f. Dec 13 22:59:07.252000 audit[4501]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:07.252000 audit[4501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff7b85180 a2=0 a3=1 items=0 ppid=2863 pid=4501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:07.269000 audit: BPF prog-id=225 op=LOAD Dec 13 22:59:07.269000 audit: BPF prog-id=226 op=LOAD Dec 13 22:59:07.269000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4477 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134326462353536643938663065316438393131393631303762623730 Dec 13 22:59:07.270000 audit: BPF prog-id=226 op=UNLOAD Dec 13 22:59:07.270000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4477 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134326462353536643938663065316438393131393631303762623730 Dec 13 22:59:07.270000 audit: BPF prog-id=227 op=LOAD Dec 13 22:59:07.270000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4477 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134326462353536643938663065316438393131393631303762623730 Dec 13 22:59:07.270000 audit: BPF prog-id=228 op=LOAD Dec 13 22:59:07.270000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4477 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.270000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134326462353536643938663065316438393131393631303762623730 Dec 13 22:59:07.270000 audit: BPF prog-id=228 op=UNLOAD Dec 13 22:59:07.270000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4477 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134326462353536643938663065316438393131393631303762623730 Dec 13 22:59:07.271000 audit: BPF prog-id=227 op=UNLOAD Dec 13 22:59:07.271000 audit[4488]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4477 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134326462353536643938663065316438393131393631303762623730 Dec 13 22:59:07.271000 audit: BPF prog-id=229 op=LOAD Dec 13 22:59:07.271000 audit[4488]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4477 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:07.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134326462353536643938663065316438393131393631303762623730 Dec 13 22:59:07.273686 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:07.299376 containerd[1608]: time="2025-12-13T22:59:07.299334412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ddf58565-58qbn,Uid:bc906328-edba-4f47-8190-6385bd5de6a4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a42db556d98f0e1d891196107bb70ff68fc41ec59075616c7c87851799fc846f\"" Dec 13 22:59:07.301219 containerd[1608]: time="2025-12-13T22:59:07.301181490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 22:59:07.510569 containerd[1608]: time="2025-12-13T22:59:07.509451631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:07.511182 containerd[1608]: time="2025-12-13T22:59:07.511078668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 22:59:07.511182 containerd[1608]: time="2025-12-13T22:59:07.511122868Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:07.511627 kubelet[2752]: E1213 22:59:07.511406 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:07.511627 kubelet[2752]: E1213 22:59:07.511475 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:07.511982 kubelet[2752]: E1213 22:59:07.511763 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wlgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ddf58565-58qbn_calico-apiserver(bc906328-edba-4f47-8190-6385bd5de6a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:07.513293 kubelet[2752]: E1213 22:59:07.513243 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" podUID="bc906328-edba-4f47-8190-6385bd5de6a4" Dec 13 22:59:07.657722 systemd-networkd[1290]: calidfbd903f0db: Gained IPv6LL Dec 13 22:59:08.023449 containerd[1608]: time="2025-12-13T22:59:08.023105336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-xdvvs,Uid:ef0b8707-0b6c-4822-b958-0aa6bda67c50,Namespace:calico-apiserver,Attempt:0,}" Dec 13 22:59:08.023449 containerd[1608]: time="2025-12-13T22:59:08.023105416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-w5hbt,Uid:35b16e82-6b93-4598-9c60-bbc71d0b419a,Namespace:calico-apiserver,Attempt:0,}" Dec 13 22:59:08.174837 kubelet[2752]: E1213 22:59:08.174033 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:08.178043 kubelet[2752]: E1213 22:59:08.177988 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" podUID="bc906328-edba-4f47-8190-6385bd5de6a4" Dec 13 22:59:08.181002 systemd-networkd[1290]: cali468d7476965: Link UP Dec 13 22:59:08.181665 systemd-networkd[1290]: cali468d7476965: Gained carrier Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.085 [INFO][4529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0 calico-apiserver-7d74677ddd- calico-apiserver 35b16e82-6b93-4598-9c60-bbc71d0b419a 842 0 2025-12-13 22:58:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d74677ddd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d74677ddd-w5hbt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali468d7476965 [] [] }} ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.085 [INFO][4529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.120 [INFO][4549] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" HandleID="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Workload="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" Dec 13 
22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.121 [INFO][4549] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" HandleID="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Workload="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001376c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d74677ddd-w5hbt", "timestamp":"2025-12-13 22:59:08.120949364 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.121 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.121 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.121 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.132 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.138 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.144 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.146 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.149 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.149 [INFO][4549] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.152 [INFO][4549] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16 Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.159 [INFO][4549] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.168 [INFO][4549] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.168 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" host="localhost" Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.168 [INFO][4549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 22:59:08.203432 containerd[1608]: 2025-12-13 22:59:08.168 [INFO][4549] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" HandleID="k8s-pod-network.2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Workload="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" Dec 13 22:59:08.204056 containerd[1608]: 2025-12-13 22:59:08.176 [INFO][4529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0", GenerateName:"calico-apiserver-7d74677ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"35b16e82-6b93-4598-9c60-bbc71d0b419a", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d74677ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d74677ddd-w5hbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468d7476965", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:08.204056 containerd[1608]: 2025-12-13 22:59:08.177 [INFO][4529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" Dec 13 22:59:08.204056 containerd[1608]: 2025-12-13 22:59:08.177 [INFO][4529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali468d7476965 ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" Dec 13 22:59:08.204056 containerd[1608]: 2025-12-13 22:59:08.182 [INFO][4529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" Dec 13 22:59:08.204056 containerd[1608]: 2025-12-13 22:59:08.184 [INFO][4529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0", GenerateName:"calico-apiserver-7d74677ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"35b16e82-6b93-4598-9c60-bbc71d0b419a", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d74677ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16", Pod:"calico-apiserver-7d74677ddd-w5hbt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468d7476965", MAC:"9e:17:8a:57:a9:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:08.204056 containerd[1608]: 2025-12-13 22:59:08.197 [INFO][4529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-w5hbt" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--w5hbt-eth0" Dec 13 22:59:08.207000 audit[4578]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:08.207000 audit[4578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffca6de7a0 a2=0 a3=1 items=0 ppid=2863 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:08.216000 audit[4578]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:08.216000 audit[4578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffca6de7a0 a2=0 a3=1 items=0 ppid=2863 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:08.233730 
containerd[1608]: time="2025-12-13T22:59:08.233639533Z" level=info msg="connecting to shim 2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16" address="unix:///run/containerd/s/491e19ac0bbe585debfb9ddd0ca8618d46dd734b53868807301f2c6ef4c6243c" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:08.232000 audit[4585]: NETFILTER_CFG table=filter:135 family=2 entries=41 op=nft_register_chain pid=4585 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:08.232000 audit[4585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23060 a0=3 a1=ffffd384f3d0 a2=0 a3=ffff96175fa8 items=0 ppid=4052 pid=4585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.232000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:08.266779 systemd[1]: Started cri-containerd-2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16.scope - libcontainer container 2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16. Dec 13 22:59:08.277126 systemd-networkd[1290]: caliddb8928886f: Link UP Dec 13 22:59:08.277622 systemd-networkd[1290]: caliddb8928886f: Gained carrier Dec 13 22:59:08.290000 audit: BPF prog-id=230 op=LOAD Dec 13 22:59:08.291000 audit: BPF prog-id=231 op=LOAD Dec 13 22:59:08.291000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616339343037613663633662333234643336336236343331366564 Dec 13 22:59:08.292000 audit: BPF prog-id=231 op=UNLOAD Dec 13 22:59:08.292000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616339343037613663633662333234643336336236343331366564 Dec 13 22:59:08.292000 audit: BPF prog-id=232 op=LOAD Dec 13 22:59:08.292000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616339343037613663633662333234643336336236343331366564 Dec 13 22:59:08.292000 audit: BPF prog-id=233 op=LOAD 
Dec 13 22:59:08.292000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616339343037613663633662333234643336336236343331366564 Dec 13 22:59:08.292000 audit: BPF prog-id=233 op=UNLOAD Dec 13 22:59:08.292000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616339343037613663633662333234643336336236343331366564 Dec 13 22:59:08.292000 audit: BPF prog-id=232 op=UNLOAD Dec 13 22:59:08.292000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616339343037613663633662333234643336336236343331366564 Dec 13 22:59:08.292000 audit: BPF prog-id=234 op=LOAD Dec 13 22:59:08.292000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4588 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263616339343037613663633662333234643336336236343331366564 Dec 13 22:59:08.295006 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.090 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0 calico-apiserver-7d74677ddd- calico-apiserver ef0b8707-0b6c-4822-b958-0aa6bda67c50 846 0 2025-12-13 22:58:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d74677ddd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7d74677ddd-xdvvs eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] caliddb8928886f [] [] }} ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.090 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.123 [INFO][4555] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" HandleID="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Workload="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.123 [INFO][4555] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" HandleID="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Workload="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059ea60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7d74677ddd-xdvvs", "timestamp":"2025-12-13 22:59:08.123359561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.123 [INFO][4555] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.168 [INFO][4555] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.169 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.234 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.242 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.249 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.252 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.255 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.256 [INFO][4555] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.258 [INFO][4555] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.262 [INFO][4555] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.271 [INFO][4555] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.271 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" host="localhost" Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.271 [INFO][4555] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 22:59:08.295714 containerd[1608]: 2025-12-13 22:59:08.271 [INFO][4555] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" HandleID="k8s-pod-network.85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Workload="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" Dec 13 22:59:08.296223 containerd[1608]: 2025-12-13 22:59:08.274 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0", GenerateName:"calico-apiserver-7d74677ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef0b8707-0b6c-4822-b958-0aa6bda67c50", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d74677ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7d74677ddd-xdvvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddb8928886f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:08.296223 containerd[1608]: 2025-12-13 22:59:08.274 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" Dec 13 22:59:08.296223 containerd[1608]: 2025-12-13 22:59:08.274 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddb8928886f ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" Dec 13 22:59:08.296223 containerd[1608]: 2025-12-13 22:59:08.278 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" Dec 13 22:59:08.296223 containerd[1608]: 2025-12-13 22:59:08.279 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0", GenerateName:"calico-apiserver-7d74677ddd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef0b8707-0b6c-4822-b958-0aa6bda67c50", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d74677ddd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c", Pod:"calico-apiserver-7d74677ddd-xdvvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliddb8928886f", MAC:"1e:8b:d1:99:13:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:08.296223 containerd[1608]: 2025-12-13 22:59:08.290 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" Namespace="calico-apiserver" Pod="calico-apiserver-7d74677ddd-xdvvs" WorkloadEndpoint="localhost-k8s-calico--apiserver--7d74677ddd--xdvvs-eth0" Dec 13 22:59:08.305000 audit[4626]: NETFILTER_CFG table=filter:136 family=2 entries=41 op=nft_register_chain pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:08.305000 audit[4626]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23096 a0=3 a1=ffffe7366ae0 a2=0 a3=ffff95f13fa8 items=0 ppid=4052 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.305000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:08.321205 containerd[1608]: time="2025-12-13T22:59:08.321154015Z" level=info msg="connecting to shim 85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c" address="unix:///run/containerd/s/853c23dc0fc229d3cac16a8ea3b1685ef5480e57a5199a33c20513723955ab4e" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:08.335045 containerd[1608]: time="2025-12-13T22:59:08.335009996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-w5hbt,Uid:35b16e82-6b93-4598-9c60-bbc71d0b419a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"2cac9407a6cc6b324d363b64316ed599717d975428fd593882e54638b4758b16\"" Dec 13 22:59:08.337637 containerd[1608]: time="2025-12-13T22:59:08.337610193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 22:59:08.346098 systemd[1]: Started cri-containerd-85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c.scope - libcontainer container 85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c. Dec 13 22:59:08.356000 audit: BPF prog-id=235 op=LOAD Dec 13 22:59:08.357000 audit: BPF prog-id=236 op=LOAD Dec 13 22:59:08.357000 audit[4653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4635 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835656233396466313264323632383464383139663333613738366339 Dec 13 22:59:08.357000 audit: BPF prog-id=236 op=UNLOAD Dec 13 22:59:08.357000 audit[4653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835656233396466313264323632383464383139663333613738366339 Dec 13 22:59:08.357000 audit: BPF prog-id=237 op=LOAD Dec 13 22:59:08.357000 audit[4653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4635 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835656233396466313264323632383464383139663333613738366339 Dec 13 22:59:08.357000 audit: BPF prog-id=238 op=LOAD Dec 13 22:59:08.357000 audit[4653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4635 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835656233396466313264323632383464383139663333613738366339 Dec 13 22:59:08.357000 audit: BPF prog-id=238 op=UNLOAD Dec 13 22:59:08.357000 audit[4653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835656233396466313264323632383464383139663333613738366339 Dec 13 22:59:08.357000 audit: BPF prog-id=237 op=UNLOAD Dec 13 22:59:08.357000 audit[4653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4635 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835656233396466313264323632383464383139663333613738366339 Dec 13 22:59:08.357000 audit: BPF prog-id=239 op=LOAD Dec 13 22:59:08.357000 audit[4653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4635 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:08.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835656233396466313264323632383464383139663333613738366339 Dec 13 22:59:08.359845 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:08.380672 containerd[1608]: time="2025-12-13T22:59:08.380391975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d74677ddd-xdvvs,Uid:ef0b8707-0b6c-4822-b958-0aa6bda67c50,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"85eb39df12d26284d819f33a786c938d4ce5df83a68e8c78d6985a4a00d15a7c\"" Dec 13 22:59:08.532035 containerd[1608]: time="2025-12-13T22:59:08.531906492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:08.533814 containerd[1608]: time="2025-12-13T22:59:08.533773249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 22:59:08.533966 containerd[1608]: time="2025-12-13T22:59:08.533929049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:08.534414 kubelet[2752]: E1213 22:59:08.534210 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:08.534414 kubelet[2752]: E1213 22:59:08.534261 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:08.534547 kubelet[2752]: E1213 22:59:08.534490 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbqbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d74677ddd-w5hbt_calico-apiserver(35b16e82-6b93-4598-9c60-bbc71d0b419a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:08.534912 containerd[1608]: time="2025-12-13T22:59:08.534859448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 22:59:08.536095 kubelet[2752]: E1213 22:59:08.536010 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" podUID="35b16e82-6b93-4598-9c60-bbc71d0b419a" Dec 13 22:59:08.738627 containerd[1608]: time="2025-12-13T22:59:08.738538294Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:08.745703 systemd-networkd[1290]: calid534370cbb2: Gained IPv6LL Dec 13 22:59:08.751332 containerd[1608]: time="2025-12-13T22:59:08.751286397Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 22:59:08.751421 containerd[1608]: time="2025-12-13T22:59:08.751314116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:08.751591 kubelet[2752]: E1213 22:59:08.751514 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:08.751644 kubelet[2752]: E1213 22:59:08.751589 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:08.751751 kubelet[2752]: E1213 22:59:08.751709 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hm9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d74677ddd-xdvvs_calico-apiserver(ef0b8707-0b6c-4822-b958-0aa6bda67c50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:08.752939 kubelet[2752]: E1213 22:59:08.752899 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" podUID="ef0b8707-0b6c-4822-b958-0aa6bda67c50" Dec 13 22:59:09.026265 kubelet[2752]: E1213 22:59:09.026240 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:09.026605 containerd[1608]: time="2025-12-13T22:59:09.026570828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc99d86df-qrdhj,Uid:de321aa1-ac48-4c1f-9093-69f544b891b9,Namespace:calico-system,Attempt:0,}" Dec 13 22:59:09.027235 containerd[1608]: time="2025-12-13T22:59:09.026690828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8xcm,Uid:1927b80b-e803-4102-9a4e-f1521044c395,Namespace:kube-system,Attempt:0,}" Dec 13 22:59:09.027235 containerd[1608]: time="2025-12-13T22:59:09.026837268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dxl2,Uid:e6e20487-ee64-4317-b075-5244d40e7b5a,Namespace:calico-system,Attempt:0,}" Dec 13 22:59:09.159766 systemd-networkd[1290]: calid749d569a3f: Link UP Dec 13 22:59:09.160152 systemd-networkd[1290]: calid749d569a3f: Gained carrier Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.081 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0 coredns-668d6bf9bc- kube-system 1927b80b-e803-4102-9a4e-f1521044c395 838 0 2025-12-13 22:58:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-w8xcm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid749d569a3f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.081 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.117 [INFO][4732] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" HandleID="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Workload="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.117 [INFO][4732] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" 
HandleID="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Workload="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-w8xcm", "timestamp":"2025-12-13 22:59:09.117703394 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.118 [INFO][4732] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.118 [INFO][4732] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.118 [INFO][4732] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.128 [INFO][4732] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.132 [INFO][4732] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.137 [INFO][4732] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.139 [INFO][4732] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.141 [INFO][4732] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.142 [INFO][4732] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.143 [INFO][4732] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.147 [INFO][4732] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.152 [INFO][4732] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.152 [INFO][4732] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" host="localhost" Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.152 [INFO][4732] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 22:59:09.174951 containerd[1608]: 2025-12-13 22:59:09.152 [INFO][4732] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" HandleID="k8s-pod-network.ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Workload="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" Dec 13 22:59:09.175442 containerd[1608]: 2025-12-13 22:59:09.155 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1927b80b-e803-4102-9a4e-f1521044c395", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-w8xcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid749d569a3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:09.175442 containerd[1608]: 2025-12-13 22:59:09.155 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" Dec 13 22:59:09.175442 containerd[1608]: 2025-12-13 22:59:09.155 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid749d569a3f ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" Dec 13 22:59:09.175442 containerd[1608]: 2025-12-13 22:59:09.161 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" Dec 13 22:59:09.175442 
containerd[1608]: 2025-12-13 22:59:09.162 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1927b80b-e803-4102-9a4e-f1521044c395", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c", Pod:"coredns-668d6bf9bc-w8xcm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid749d569a3f", MAC:"0e:4a:54:ba:29:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:09.175442 containerd[1608]: 2025-12-13 22:59:09.173 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" Namespace="kube-system" Pod="coredns-668d6bf9bc-w8xcm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w8xcm-eth0" Dec 13 22:59:09.182081 kubelet[2752]: E1213 22:59:09.181805 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" podUID="ef0b8707-0b6c-4822-b958-0aa6bda67c50" Dec 13 22:59:09.184263 kubelet[2752]: E1213 22:59:09.184231 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:09.185248 kubelet[2752]: E1213 22:59:09.185197 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" podUID="35b16e82-6b93-4598-9c60-bbc71d0b419a" Dec 13 22:59:09.187241 kubelet[2752]: E1213 22:59:09.187195 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" podUID="bc906328-edba-4f47-8190-6385bd5de6a4" Dec 13 22:59:09.189000 audit[4759]: NETFILTER_CFG table=filter:137 family=2 entries=40 op=nft_register_chain pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:09.189000 audit[4759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20368 a0=3 a1=fffff14fbe90 a2=0 a3=ffff8d4f9fa8 items=0 ppid=4052 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.189000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:09.216000 audit[4767]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=4767 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:09.216000 audit[4767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd104f440 a2=0 a3=1 items=0 ppid=2863 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.216000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:09.220000 audit[4767]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=4767 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:09.222677 containerd[1608]: time="2025-12-13T22:59:09.222637461Z" level=info msg="connecting to shim ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c" address="unix:///run/containerd/s/f5c1b8d67ac11632dd7929e032490d8d650292c7f69af737dcdf8fa648592f5d" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:09.220000 audit[4767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd104f440 a2=0 a3=1 items=0 ppid=2863 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:09.252809 systemd[1]: Started cri-containerd-ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c.scope - libcontainer container 
ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c. Dec 13 22:59:09.267000 audit: BPF prog-id=240 op=LOAD Dec 13 22:59:09.268000 audit: BPF prog-id=241 op=LOAD Dec 13 22:59:09.268000 audit[4782]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4770 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643737393662313036333336363330613830376565646535383637 Dec 13 22:59:09.268000 audit: BPF prog-id=241 op=UNLOAD Dec 13 22:59:09.268000 audit[4782]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4770 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643737393662313036333336363330613830376565646535383637 Dec 13 22:59:09.268000 audit: BPF prog-id=242 op=LOAD Dec 13 22:59:09.268000 audit[4782]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4770 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643737393662313036333336363330613830376565646535383637 Dec 13 22:59:09.268000 audit: BPF prog-id=243 op=LOAD Dec 13 22:59:09.268000 audit[4782]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4770 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643737393662313036333336363330613830376565646535383637 Dec 13 22:59:09.268000 audit: BPF prog-id=243 op=UNLOAD Dec 13 22:59:09.268000 audit[4782]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4770 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643737393662313036333336363330613830376565646535383637 Dec 13 22:59:09.268000 audit: 
BPF prog-id=242 op=UNLOAD Dec 13 22:59:09.268000 audit[4782]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4770 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643737393662313036333336363330613830376565646535383637 Dec 13 22:59:09.268000 audit: BPF prog-id=244 op=LOAD Dec 13 22:59:09.268000 audit[4782]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4770 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.268000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643737393662313036333336363330613830376565646535383637 Dec 13 22:59:09.271083 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:09.272344 systemd-networkd[1290]: cali8f5d574105b: Link UP Dec 13 22:59:09.272583 systemd-networkd[1290]: cali8f5d574105b: Gained carrier Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.077 [INFO][4687] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8dxl2-eth0 csi-node-driver- calico-system e6e20487-ee64-4317-b075-5244d40e7b5a 734 0 2025-12-13 22:58:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8dxl2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f5d574105b [] [] }} ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.078 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-eth0" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.119 [INFO][4725] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" HandleID="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Workload="localhost-k8s-csi--node--driver--8dxl2-eth0" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.119 [INFO][4725] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" 
HandleID="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Workload="localhost-k8s-csi--node--driver--8dxl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8dxl2", "timestamp":"2025-12-13 22:59:09.119206872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.119 [INFO][4725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.152 [INFO][4725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.153 [INFO][4725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.231 [INFO][4725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.237 [INFO][4725] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.242 [INFO][4725] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.244 [INFO][4725] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.246 [INFO][4725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.246 [INFO][4725] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.249 [INFO][4725] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.256 [INFO][4725] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.264 [INFO][4725] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.264 [INFO][4725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" host="localhost" Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.264 [INFO][4725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 22:59:09.290729 containerd[1608]: 2025-12-13 22:59:09.264 [INFO][4725] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" HandleID="k8s-pod-network.7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Workload="localhost-k8s-csi--node--driver--8dxl2-eth0" Dec 13 22:59:09.291518 containerd[1608]: 2025-12-13 22:59:09.267 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8dxl2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6e20487-ee64-4317-b075-5244d40e7b5a", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8dxl2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5d574105b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:09.291518 containerd[1608]: 2025-12-13 22:59:09.267 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-eth0" Dec 13 22:59:09.291518 containerd[1608]: 2025-12-13 22:59:09.267 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f5d574105b ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-eth0" Dec 13 22:59:09.291518 containerd[1608]: 2025-12-13 22:59:09.272 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-eth0" Dec 13 22:59:09.291518 containerd[1608]: 2025-12-13 22:59:09.273 [INFO][4687] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8dxl2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6e20487-ee64-4317-b075-5244d40e7b5a", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca", Pod:"csi-node-driver-8dxl2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5d574105b", MAC:"be:39:35:1e:f0:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:09.291518 containerd[1608]: 2025-12-13 22:59:09.286 [INFO][4687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" Namespace="calico-system" Pod="csi-node-driver-8dxl2" WorkloadEndpoint="localhost-k8s-csi--node--driver--8dxl2-eth0" Dec 13 22:59:09.300454 containerd[1608]: time="2025-12-13T22:59:09.300340923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w8xcm,Uid:1927b80b-e803-4102-9a4e-f1521044c395,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c\"" Dec 13 22:59:09.301538 kubelet[2752]: E1213 22:59:09.301287 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:09.304164 containerd[1608]: time="2025-12-13T22:59:09.304095759Z" level=info msg="CreateContainer within sandbox \"ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 22:59:09.305000 audit[4818]: NETFILTER_CFG table=filter:140 family=2 entries=44 op=nft_register_chain pid=4818 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:09.305000 audit[4818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21936 a0=3 a1=fffff9011010 a2=0 a3=ffffa8dcffa8 items=0 ppid=4052 pid=4818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.305000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:09.312779 
containerd[1608]: time="2025-12-13T22:59:09.312747228Z" level=info msg="Container 1e487c49783bf2f8c93cb403582d16eda09a240282061bbf01b0b2a584adb56d: CDI devices from CRI Config.CDIDevices: []" Dec 13 22:59:09.315964 containerd[1608]: time="2025-12-13T22:59:09.315669984Z" level=info msg="connecting to shim 7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca" address="unix:///run/containerd/s/80610502caa5fff9e117bae06287d6112e22a806cdea9128bcf5f2a091f9088a" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:09.320891 containerd[1608]: time="2025-12-13T22:59:09.320842057Z" level=info msg="CreateContainer within sandbox \"ffd7796b106336630a807eede58679ffd6e2ff463062a29ef6dc3ff146ee197c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e487c49783bf2f8c93cb403582d16eda09a240282061bbf01b0b2a584adb56d\"" Dec 13 22:59:09.321610 containerd[1608]: time="2025-12-13T22:59:09.321571616Z" level=info msg="StartContainer for \"1e487c49783bf2f8c93cb403582d16eda09a240282061bbf01b0b2a584adb56d\"" Dec 13 22:59:09.322502 containerd[1608]: time="2025-12-13T22:59:09.322476935Z" level=info msg="connecting to shim 1e487c49783bf2f8c93cb403582d16eda09a240282061bbf01b0b2a584adb56d" address="unix:///run/containerd/s/f5c1b8d67ac11632dd7929e032490d8d650292c7f69af737dcdf8fa648592f5d" protocol=ttrpc version=3 Dec 13 22:59:09.342805 systemd[1]: Started cri-containerd-7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca.scope - libcontainer container 7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca. Dec 13 22:59:09.346465 systemd[1]: Started cri-containerd-1e487c49783bf2f8c93cb403582d16eda09a240282061bbf01b0b2a584adb56d.scope - libcontainer container 1e487c49783bf2f8c93cb403582d16eda09a240282061bbf01b0b2a584adb56d. Dec 13 22:59:09.362000 audit: BPF prog-id=245 op=LOAD Dec 13 22:59:09.363000 audit: BPF prog-id=246 op=LOAD Dec 13 22:59:09.363000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4827 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343035383739343834633035373839396234396432333837303534 Dec 13 22:59:09.364000 audit: BPF prog-id=246 op=UNLOAD Dec 13 22:59:09.364000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343035383739343834633035373839396234396432333837303534 Dec 13 22:59:09.364000 audit: BPF prog-id=247 op=LOAD Dec 13 22:59:09.364000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4827 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343035383739343834633035373839396234396432333837303534 Dec 13 22:59:09.364000 audit: BPF prog-id=248 op=LOAD Dec 13 22:59:09.364000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4827 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343035383739343834633035373839396234396432333837303534 Dec 13 22:59:09.364000 audit: BPF prog-id=248 op=UNLOAD Dec 13 22:59:09.364000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343035383739343834633035373839396234396432333837303534 Dec 13 22:59:09.364000 audit: BPF prog-id=247 op=UNLOAD Dec 13 22:59:09.364000 audit[4840]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4827 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343035383739343834633035373839396234396432333837303534 Dec 13 22:59:09.364000 audit: BPF prog-id=249 op=LOAD Dec 13 22:59:09.364000 audit[4840]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4827 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.364000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3762343035383739343834633035373839396234396432333837303534 Dec 13 22:59:09.366529 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:09.367000 audit: BPF prog-id=250 op=LOAD Dec 13 22:59:09.369589 systemd-networkd[1290]: cali4b86ade2cff: Link UP Dec 13 22:59:09.368000 audit: BPF prog-id=251 op=LOAD Dec 13 22:59:09.368000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4770 pid=4841 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343837633439373833626632663863393363623430333538326431 Dec 13 22:59:09.369000 audit: BPF prog-id=251 op=UNLOAD Dec 13 22:59:09.369000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4770 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.370761 systemd-networkd[1290]: cali4b86ade2cff: Gained carrier Dec 13 22:59:09.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343837633439373833626632663863393363623430333538326431 Dec 13 22:59:09.369000 audit: BPF prog-id=252 op=LOAD Dec 13 22:59:09.369000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4770 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343837633439373833626632663863393363623430333538326431 Dec 13 22:59:09.370000 audit: BPF prog-id=253 op=LOAD Dec 13 22:59:09.370000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4770 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343837633439373833626632663863393363623430333538326431 Dec 13 22:59:09.370000 audit: BPF prog-id=253 op=UNLOAD Dec 13 22:59:09.370000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4770 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343837633439373833626632663863393363623430333538326431 Dec 13 22:59:09.370000 audit: BPF prog-id=252 op=UNLOAD Dec 13 22:59:09.370000 audit[4841]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4770 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343837633439373833626632663863393363623430333538326431 Dec 13 22:59:09.370000 audit: BPF prog-id=254 op=LOAD Dec 13 22:59:09.370000 audit[4841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4770 pid=4841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.370000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165343837633439373833626632663863393363623430333538326431 Dec 13 22:59:09.392296 containerd[1608]: time="2025-12-13T22:59:09.392249527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8dxl2,Uid:e6e20487-ee64-4317-b075-5244d40e7b5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b405879484c057899b49d23870543cd36ee99d3f6e0205a32fc8594074cb8ca\"" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.080 [INFO][4681] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0 calico-kube-controllers-7dc99d86df- calico-system de321aa1-ac48-4c1f-9093-69f544b891b9 844 0 2025-12-13 22:58:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dc99d86df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7dc99d86df-qrdhj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4b86ade2cff [] [] }} ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.080 [INFO][4681] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.129 [INFO][4737] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" HandleID="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Workload="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.130 [INFO][4737] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" HandleID="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" 
Workload="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7dc99d86df-qrdhj", "timestamp":"2025-12-13 22:59:09.129288259 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.130 [INFO][4737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.264 [INFO][4737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.264 [INFO][4737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.328 [INFO][4737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.337 [INFO][4737] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.342 [INFO][4737] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.343 [INFO][4737] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.347 [INFO][4737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.347 [INFO][4737] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.348 [INFO][4737] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.354 [INFO][4737] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.362 [INFO][4737] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.362 [INFO][4737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" host="localhost" Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.362 [INFO][4737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 22:59:09.397164 containerd[1608]: 2025-12-13 22:59:09.362 [INFO][4737] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" HandleID="k8s-pod-network.8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Workload="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" Dec 13 22:59:09.399030 containerd[1608]: 2025-12-13 22:59:09.366 [INFO][4681] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0", GenerateName:"calico-kube-controllers-7dc99d86df-", Namespace:"calico-system", SelfLink:"", UID:"de321aa1-ac48-4c1f-9093-69f544b891b9", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dc99d86df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7dc99d86df-qrdhj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b86ade2cff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:09.399030 containerd[1608]: 2025-12-13 22:59:09.366 [INFO][4681] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" Dec 13 22:59:09.399030 containerd[1608]: 2025-12-13 22:59:09.366 [INFO][4681] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b86ade2cff ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" Dec 13 22:59:09.399030 containerd[1608]: 2025-12-13 22:59:09.371 [INFO][4681] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" Dec 13 22:59:09.399030 containerd[1608]: 2025-12-13 22:59:09.372 [INFO][4681] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0", GenerateName:"calico-kube-controllers-7dc99d86df-", Namespace:"calico-system", SelfLink:"", UID:"de321aa1-ac48-4c1f-9093-69f544b891b9", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dc99d86df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c", Pod:"calico-kube-controllers-7dc99d86df-qrdhj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b86ade2cff", MAC:"9e:c4:fe:13:16:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:09.399030 containerd[1608]: 2025-12-13 22:59:09.386 [INFO][4681] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" Namespace="calico-system" Pod="calico-kube-controllers-7dc99d86df-qrdhj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7dc99d86df--qrdhj-eth0" Dec 13 22:59:09.401662 containerd[1608]: time="2025-12-13T22:59:09.401614796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 22:59:09.406154 containerd[1608]: time="2025-12-13T22:59:09.406117110Z" level=info msg="StartContainer for \"1e487c49783bf2f8c93cb403582d16eda09a240282061bbf01b0b2a584adb56d\" returns successfully" Dec 13 22:59:09.414000 audit[4903]: NETFILTER_CFG table=filter:141 family=2 entries=54 op=nft_register_chain pid=4903 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:09.414000 audit[4903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25976 a0=3 a1=ffffde1b9420 a2=0 a3=ffffba743fa8 items=0 ppid=4052 pid=4903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.414000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:09.437726 containerd[1608]: time="2025-12-13T22:59:09.437682510Z" level=info msg="connecting to shim 8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c" 
address="unix:///run/containerd/s/b674cf7f97a179d2a66b8853679420192011ca59c3fcb7f00b4b4cd6be595947" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:09.467785 systemd[1]: Started cri-containerd-8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c.scope - libcontainer container 8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c. Dec 13 22:59:09.478000 audit: BPF prog-id=255 op=LOAD Dec 13 22:59:09.478000 audit: BPF prog-id=256 op=LOAD Dec 13 22:59:09.478000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373836353936313631373435336630346561393033373332633266 Dec 13 22:59:09.478000 audit: BPF prog-id=256 op=UNLOAD Dec 13 22:59:09.478000 audit[4927]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373836353936313631373435336630346561393033373332633266 Dec 13 22:59:09.478000 audit: BPF prog-id=257 op=LOAD Dec 13 22:59:09.478000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373836353936313631373435336630346561393033373332633266 Dec 13 22:59:09.478000 audit: BPF prog-id=258 op=LOAD Dec 13 22:59:09.478000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373836353936313631373435336630346561393033373332633266 Dec 13 22:59:09.479000 audit: BPF prog-id=258 op=UNLOAD Dec 13 22:59:09.479000 audit[4927]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.479000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373836353936313631373435336630346561393033373332633266 Dec 13 22:59:09.479000 audit: BPF prog-id=257 op=UNLOAD Dec 13 22:59:09.479000 audit[4927]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373836353936313631373435336630346561393033373332633266 Dec 13 22:59:09.479000 audit: BPF prog-id=259 op=LOAD Dec 13 22:59:09.479000 audit[4927]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4916 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:09.479000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865373836353936313631373435336630346561393033373332633266 Dec 13 22:59:09.481312 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:09.502153 containerd[1608]: time="2025-12-13T22:59:09.502112629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dc99d86df-qrdhj,Uid:de321aa1-ac48-4c1f-9093-69f544b891b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e7865961617453f04ea903732c2f67c3b1ae62d21411c7c23f92cd38991816c\"" Dec 13 22:59:09.577744 systemd-networkd[1290]: cali468d7476965: Gained IPv6LL Dec 13 22:59:09.595785 containerd[1608]: time="2025-12-13T22:59:09.595737991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:09.596733 containerd[1608]: time="2025-12-13T22:59:09.596694390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 22:59:09.596807 containerd[1608]: time="2025-12-13T22:59:09.596754629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:09.597075 kubelet[2752]: E1213 22:59:09.596925 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 22:59:09.597319 kubelet[2752]: E1213 22:59:09.597148 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 
22:59:09.597409 kubelet[2752]: E1213 22:59:09.597363 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxbwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dxl2_calico-system(e6e20487-ee64-4317-b075-5244d40e7b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:09.598985 containerd[1608]: time="2025-12-13T22:59:09.598876347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 22:59:09.813906 containerd[1608]: time="2025-12-13T22:59:09.813849996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:09.814847 containerd[1608]: time="2025-12-13T22:59:09.814759595Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 22:59:09.815002 containerd[1608]: time="2025-12-13T22:59:09.814922234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:09.815046 kubelet[2752]: E1213 22:59:09.814979 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 22:59:09.815046 kubelet[2752]: E1213 
22:59:09.815022 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 22:59:09.815399 containerd[1608]: time="2025-12-13T22:59:09.815312514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 22:59:09.815464 kubelet[2752]: E1213 22:59:09.815343 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7v4vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dc99d86df-qrdhj_calico-system(de321aa1-ac48-4c1f-9093-69f544b891b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:09.817173 kubelet[2752]: E1213 22:59:09.817141 2752 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" podUID="de321aa1-ac48-4c1f-9093-69f544b891b9" Dec 13 22:59:10.021671 containerd[1608]: time="2025-12-13T22:59:10.021619415Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:10.022986 containerd[1608]: time="2025-12-13T22:59:10.022939734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 22:59:10.023086 containerd[1608]: time="2025-12-13T22:59:10.023029934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:10.023086 containerd[1608]: time="2025-12-13T22:59:10.023061134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmghn,Uid:9fe07c94-e384-4193-9d4b-ef9d906eb265,Namespace:calico-system,Attempt:0,}" Dec 13 22:59:10.023165 kubelet[2752]: E1213 22:59:10.023111 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 22:59:10.023165 kubelet[2752]: E1213 22:59:10.023156 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 22:59:10.023301 kubelet[2752]: E1213 22:59:10.023263 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxbwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dxl2_calico-system(e6e20487-ee64-4317-b075-5244d40e7b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:10.024483 kubelet[2752]: E1213 22:59:10.024429 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:59:10.123502 systemd-networkd[1290]: cali18da53e9d36: Link UP Dec 13 22:59:10.123867 systemd-networkd[1290]: cali18da53e9d36: Gained carrier Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.059 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vmghn-eth0 goldmane-666569f655- calico-system 9fe07c94-e384-4193-9d4b-ef9d906eb265 843 0 2025-12-13 22:58:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost 
goldmane-666569f655-vmghn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali18da53e9d36 [] [] }} ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.060 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-eth0" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.083 [INFO][4969] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" HandleID="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Workload="localhost-k8s-goldmane--666569f655--vmghn-eth0" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.083 [INFO][4969] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" HandleID="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Workload="localhost-k8s-goldmane--666569f655--vmghn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-vmghn", "timestamp":"2025-12-13 22:59:10.083152463 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.083 [INFO][4969] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.083 [INFO][4969] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.083 [INFO][4969] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.092 [INFO][4969] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.096 [INFO][4969] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.101 [INFO][4969] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.103 [INFO][4969] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.106 [INFO][4969] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.106 [INFO][4969] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.107 [INFO][4969] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4 Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.111 [INFO][4969] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.118 [INFO][4969] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.118 [INFO][4969] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" host="localhost" Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.118 [INFO][4969] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 22:59:10.141221 containerd[1608]: 2025-12-13 22:59:10.118 [INFO][4969] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" HandleID="k8s-pod-network.a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Workload="localhost-k8s-goldmane--666569f655--vmghn-eth0" Dec 13 22:59:10.142291 containerd[1608]: 2025-12-13 22:59:10.121 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vmghn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9fe07c94-e384-4193-9d4b-ef9d906eb265", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vmghn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18da53e9d36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:10.142291 containerd[1608]: 2025-12-13 22:59:10.121 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-eth0" Dec 13 22:59:10.142291 containerd[1608]: 2025-12-13 22:59:10.121 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18da53e9d36 ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-eth0" Dec 13 22:59:10.142291 containerd[1608]: 2025-12-13 22:59:10.123 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-eth0" Dec 13 22:59:10.142291 containerd[1608]: 2025-12-13 22:59:10.127 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vmghn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9fe07c94-e384-4193-9d4b-ef9d906eb265", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 22, 58, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4", Pod:"goldmane-666569f655-vmghn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18da53e9d36", MAC:"ba:3f:11:ba:97:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 22:59:10.142291 containerd[1608]: 2025-12-13 22:59:10.138 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" Namespace="calico-system" Pod="goldmane-666569f655-vmghn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vmghn-eth0" Dec 13 22:59:10.154000 audit[4985]: NETFILTER_CFG table=filter:142 family=2 entries=56 op=nft_register_chain pid=4985 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 22:59:10.154000 audit[4985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28712 a0=3 a1=ffffdfb64e00 a2=0 a3=ffffaedcdfa8 items=0 ppid=4052 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.154000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 22:59:10.162898 containerd[1608]: time="2025-12-13T22:59:10.162794689Z" level=info msg="connecting to shim a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4" address="unix:///run/containerd/s/56cd03fb0e03ffbc1d4e13701a5943502d5f578dbd5b7c1d9db3d63c04f90721" namespace=k8s.io protocol=ttrpc version=3 Dec 13 22:59:10.191941 systemd[1]: Started cri-containerd-a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4.scope - libcontainer container a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4. 
Dec 13 22:59:10.193095 kubelet[2752]: E1213 22:59:10.192313 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:10.193634 kubelet[2752]: E1213 22:59:10.193597 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:59:10.198332 kubelet[2752]: E1213 22:59:10.198299 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" podUID="ef0b8707-0b6c-4822-b958-0aa6bda67c50" Dec 13 22:59:10.198332 kubelet[2752]: E1213 22:59:10.198308 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" podUID="de321aa1-ac48-4c1f-9093-69f544b891b9" Dec 13 22:59:10.198332 kubelet[2752]: E1213 22:59:10.198347 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" podUID="35b16e82-6b93-4598-9c60-bbc71d0b419a" Dec 13 22:59:10.211000 audit: BPF prog-id=260 op=LOAD Dec 13 22:59:10.212000 audit: BPF prog-id=261 op=LOAD Dec 13 22:59:10.212000 audit[5005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4994 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.212000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133376438333466306361363739633034653131313131656665613538 Dec 13 22:59:10.212000 audit: BPF prog-id=261 op=UNLOAD Dec 13 22:59:10.212000 audit[5005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4994 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133376438333466306361363739633034653131313131656665613538 Dec 13 22:59:10.212000 audit: BPF prog-id=262 op=LOAD Dec 13 22:59:10.212000 audit[5005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4994 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133376438333466306361363739633034653131313131656665613538 Dec 13 22:59:10.212000 audit: BPF prog-id=263 op=LOAD Dec 13 22:59:10.212000 audit[5005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4994 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133376438333466306361363739633034653131313131656665613538 Dec 13 22:59:10.212000 audit: BPF prog-id=263 op=UNLOAD Dec 13 22:59:10.212000 audit[5005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4994 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133376438333466306361363739633034653131313131656665613538 Dec 13 22:59:10.212000 audit: BPF prog-id=262 op=UNLOAD Dec 13 22:59:10.212000 audit[5005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4994 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.212000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133376438333466306361363739633034653131313131656665613538 Dec 13 22:59:10.212000 audit: BPF prog-id=264 op=LOAD Dec 13 22:59:10.212000 audit[5005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4994 pid=5005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.212000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133376438333466306361363739633034653131313131656665613538 Dec 13 22:59:10.214807 systemd-resolved[1249]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 22:59:10.219613 systemd-networkd[1290]: caliddb8928886f: Gained IPv6LL Dec 13 22:59:10.244000 audit[5031]: NETFILTER_CFG table=filter:143 family=2 entries=14 op=nft_register_rule pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:10.244000 audit[5031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff44b72c0 a2=0 a3=1 items=0 ppid=2863 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:10.252000 audit[5031]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:10.252000 audit[5031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff44b72c0 a2=0 a3=1 items=0 ppid=2863 pid=5031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:10.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:10.256363 containerd[1608]: time="2025-12-13T22:59:10.256113938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmghn,Uid:9fe07c94-e384-4193-9d4b-ef9d906eb265,Namespace:calico-system,Attempt:0,} returns sandbox id \"a37d834f0ca679c04e11111efea580944808de0cb4b1e796b2a9728959b4aab4\"" Dec 13 22:59:10.258191 containerd[1608]: time="2025-12-13T22:59:10.257633336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 22:59:10.263301 kubelet[2752]: I1213 22:59:10.263230 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w8xcm" podStartSLOduration=41.26321389 podStartE2EDuration="41.26321389s" podCreationTimestamp="2025-12-13 22:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 22:59:10.249861666 +0000 UTC m=+47.363622110" watchObservedRunningTime="2025-12-13 22:59:10.26321389 +0000 UTC 
m=+47.376974334" Dec 13 22:59:10.471853 containerd[1608]: time="2025-12-13T22:59:10.471717123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:10.473283 containerd[1608]: time="2025-12-13T22:59:10.473225842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 22:59:10.473977 containerd[1608]: time="2025-12-13T22:59:10.473243082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:10.474504 kubelet[2752]: E1213 22:59:10.474291 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 22:59:10.474504 kubelet[2752]: E1213 22:59:10.474351 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 22:59:10.474713 kubelet[2752]: E1213 22:59:10.474667 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdcqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmghn_calico-system(9fe07c94-e384-4193-9d4b-ef9d906eb265): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:10.476006 kubelet[2752]: E1213 22:59:10.475967 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmghn" podUID="9fe07c94-e384-4193-9d4b-ef9d906eb265" Dec 13 22:59:10.601808 systemd-networkd[1290]: cali4b86ade2cff: Gained IPv6LL Dec 13 22:59:10.665758 systemd-networkd[1290]: cali8f5d574105b: Gained IPv6LL Dec 13 22:59:10.729770 systemd-networkd[1290]: calid749d569a3f: Gained IPv6LL Dec 13 22:59:11.200310 kubelet[2752]: E1213 22:59:11.200254 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:11.201103 kubelet[2752]: E1213 22:59:11.201070 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmghn" podUID="9fe07c94-e384-4193-9d4b-ef9d906eb265" Dec 13 22:59:11.201411 kubelet[2752]: E1213 22:59:11.201358 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" podUID="de321aa1-ac48-4c1f-9093-69f544b891b9" Dec 13 22:59:11.202070 kubelet[2752]: E1213 22:59:11.202013 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:59:11.235000 audit[5033]: NETFILTER_CFG table=filter:145 family=2 entries=14 op=nft_register_rule pid=5033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:11.238270 kernel: kauditd_printk_skb: 275 callbacks suppressed Dec 13 22:59:11.238344 kernel: audit: type=1325 audit(1765666751.235:769): table=filter:145 family=2 entries=14 op=nft_register_rule pid=5033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:11.235000 audit[5033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffecad76f0 a2=0 a3=1 items=0 ppid=2863 pid=5033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:11.243548 kernel: audit: type=1300 audit(1765666751.235:769): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffecad76f0 a2=0 a3=1 items=0 ppid=2863 pid=5033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:11.243642 kernel: audit: type=1327 audit(1765666751.235:769): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:11.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:11.248000 audit[5033]: NETFILTER_CFG table=nat:146 family=2 entries=56 op=nft_register_chain pid=5033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:11.248000 audit[5033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffecad76f0 a2=0 a3=1 items=0 ppid=2863 pid=5033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:11.255289 kernel: audit: type=1325 audit(1765666751.248:770): table=nat:146 family=2 entries=56 op=nft_register_chain pid=5033 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:11.255351 kernel: audit: type=1300 audit(1765666751.248:770): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffecad76f0 a2=0 a3=1 items=0 ppid=2863 pid=5033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:11.255374 kernel: audit: type=1327 audit(1765666751.248:770): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:11.248000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:11.561694 systemd-networkd[1290]: cali18da53e9d36: Gained IPv6LL Dec 13 22:59:12.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.10:22-10.0.0.1:37850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:12.180910 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:37850.service - OpenSSH per-connection server daemon (10.0.0.1:37850). Dec 13 22:59:12.184588 kernel: audit: type=1130 audit(1765666752.179:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.10:22-10.0.0.1:37850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:12.207583 kubelet[2752]: E1213 22:59:12.207322 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:12.212147 kubelet[2752]: E1213 22:59:12.212107 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmghn" podUID="9fe07c94-e384-4193-9d4b-ef9d906eb265" Dec 13 22:59:12.252000 audit[5036]: USER_ACCT pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.254639 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 37850 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:12.255000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.259782 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:12.260012 kernel: audit: type=1101 audit(1765666752.252:772): pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.260044 kernel: audit: type=1103 audit(1765666752.255:773): pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.261964 kernel: audit: type=1006 audit(1765666752.255:774): pid=5036 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 22:59:12.255000 audit[5036]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcdf5d180 a2=3 a3=0 items=0 ppid=1 pid=5036 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:12.255000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:12.268365 systemd-logind[1585]: New session 11 of user core. Dec 13 22:59:12.277659 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 22:59:12.280000 audit[5036]: USER_START pid=5036 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.282000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.414012 sshd[5040]: Connection closed by 10.0.0.1 port 37850 Dec 13 22:59:12.414760 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:12.414000 audit[5036]: USER_END pid=5036 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.414000 audit[5036]: CRED_DISP pid=5036 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.423312 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:37850.service: Deactivated successfully. Dec 13 22:59:12.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.10:22-10.0.0.1:37850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:12.426273 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 22:59:12.428909 systemd-logind[1585]: Session 11 logged out. Waiting for processes to exit. Dec 13 22:59:12.432193 systemd-logind[1585]: Removed session 11. Dec 13 22:59:12.434408 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:37858.service - OpenSSH per-connection server daemon (10.0.0.1:37858). Dec 13 22:59:12.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.10:22-10.0.0.1:37858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:59:12.511000 audit[5054]: USER_ACCT pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.513723 sshd[5054]: Accepted publickey for core from 10.0.0.1 port 37858 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:12.512000 audit[5054]: CRED_ACQ pid=5054 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.513000 audit[5054]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffce062b40 a2=3 a3=0 items=0 ppid=1 pid=5054 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:12.513000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:12.516276 sshd-session[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:12.521404 systemd-logind[1585]: New session 12 of user core. Dec 13 22:59:12.526746 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 22:59:12.528000 audit[5054]: USER_START pid=5054 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.530000 audit[5058]: CRED_ACQ pid=5058 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.682358 sshd[5058]: Connection closed by 10.0.0.1 port 37858 Dec 13 22:59:12.683067 sshd-session[5054]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:12.683000 audit[5054]: USER_END pid=5054 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.683000 audit[5054]: CRED_DISP pid=5054 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.693393 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:37858.service: Deactivated successfully. Dec 13 22:59:12.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.10:22-10.0.0.1:37858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:12.697088 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 22:59:12.699963 systemd-logind[1585]: Session 12 logged out. Waiting for processes to exit. 
Dec 13 22:59:12.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.10:22-10.0.0.1:37864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:12.703112 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:37864.service - OpenSSH per-connection server daemon (10.0.0.1:37864). Dec 13 22:59:12.706630 systemd-logind[1585]: Removed session 12. Dec 13 22:59:12.767965 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 37864 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:12.766000 audit[5078]: USER_ACCT pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.768000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.768000 audit[5078]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe66d8a30 a2=3 a3=0 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:12.768000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:12.771037 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:12.775721 systemd-logind[1585]: New session 13 of user core. Dec 13 22:59:12.781715 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 22:59:12.782000 audit[5078]: USER_START pid=5078 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.783000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.892605 sshd[5082]: Connection closed by 10.0.0.1 port 37864 Dec 13 22:59:12.893273 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:12.892000 audit[5078]: USER_END pid=5078 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.892000 audit[5078]: CRED_DISP pid=5078 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:12.897260 systemd-logind[1585]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 22:59:12.897459 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:37864.service: Deactivated successfully. Dec 13 22:59:12.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.10:22-10.0.0.1:37864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:12.899437 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 22:59:12.902350 systemd-logind[1585]: Removed session 13. Dec 13 22:59:14.025592 containerd[1608]: time="2025-12-13T22:59:14.024308155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 22:59:14.236093 containerd[1608]: time="2025-12-13T22:59:14.236022922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:14.242345 containerd[1608]: time="2025-12-13T22:59:14.242297476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 22:59:14.242345 containerd[1608]: time="2025-12-13T22:59:14.242372196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:14.242547 kubelet[2752]: E1213 22:59:14.242516 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 22:59:14.242894 kubelet[2752]: E1213 22:59:14.242582 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 22:59:14.242894 kubelet[2752]: E1213 22:59:14.242702 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:029548d4e33045ac9fd2ec7d416edc31,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqvhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bc476bc64-64mwd_calico-system(d1f05f6c-e0c0-404f-9df9-6993eb6f715c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:14.245046 containerd[1608]: time="2025-12-13T22:59:14.244999394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 22:59:14.454039 containerd[1608]: time="2025-12-13T22:59:14.453805603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:14.455086 containerd[1608]: time="2025-12-13T22:59:14.455038882Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 22:59:14.455170 containerd[1608]: time="2025-12-13T22:59:14.455124842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:14.455947 kubelet[2752]: E1213 22:59:14.455300 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 22:59:14.455947 kubelet[2752]: E1213 22:59:14.455349 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 22:59:14.455947 kubelet[2752]: E1213 22:59:14.455450 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqvhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bc476bc64-64mwd_calico-system(d1f05f6c-e0c0-404f-9df9-6993eb6f715c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:14.456619 kubelet[2752]: E1213 22:59:14.456575 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bc476bc64-64mwd" podUID="d1f05f6c-e0c0-404f-9df9-6993eb6f715c" Dec 13 22:59:17.908762 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:37888.service - OpenSSH per-connection server daemon (10.0.0.1:37888). Dec 13 22:59:17.913548 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 13 22:59:17.913612 kernel: audit: type=1130 audit(1765666757.908:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.10:22-10.0.0.1:37888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:59:17.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.10:22-10.0.0.1:37888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:17.968000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:17.971342 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 37888 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:17.978346 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:17.976000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:17.981501 kernel: audit: type=1101 audit(1765666757.968:799): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:17.981623 kernel: audit: type=1103 audit(1765666757.976:800): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:17.983795 kernel: audit: type=1006 audit(1765666757.976:801): pid=5098 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 22:59:17.983835 kernel: audit: type=1300 audit(1765666757.976:801): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd40a880 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:17.976000 audit[5098]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdd40a880 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:17.976000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:17.988430 kernel: audit: type=1327 audit(1765666757.976:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:17.992456 systemd-logind[1585]: New session 14 of user core. Dec 13 22:59:18.008841 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 22:59:18.011000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.014000 audit[5102]: CRED_ACQ pid=5102 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.020345 kernel: audit: type=1105 audit(1765666758.011:802): pid=5098 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.020394 kernel: audit: type=1103 audit(1765666758.014:803): pid=5102 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.115199 sshd[5102]: Connection closed by 10.0.0.1 port 37888 Dec 13 22:59:18.117217 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:18.117000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.117000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.126548 kernel: audit: type=1106 audit(1765666758.117:804): pid=5098 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.126634 kernel: audit: type=1104 audit(1765666758.117:805): pid=5098 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.127763 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:37888.service: Deactivated successfully. Dec 13 22:59:18.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.10:22-10.0.0.1:37888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:18.133312 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 22:59:18.134404 systemd-logind[1585]: Session 14 logged out. Waiting for processes to exit. 
Dec 13 22:59:18.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.10:22-10.0.0.1:37902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:18.139000 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:37902.service - OpenSSH per-connection server daemon (10.0.0.1:37902). Dec 13 22:59:18.140886 systemd-logind[1585]: Removed session 14. Dec 13 22:59:18.206000 audit[5116]: USER_ACCT pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.208788 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 37902 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:18.208000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.208000 audit[5116]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdea03460 a2=3 a3=0 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:18.208000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:18.210447 sshd-session[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:18.215758 systemd-logind[1585]: New session 15 of user core. Dec 13 22:59:18.228753 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 22:59:18.229000 audit[5116]: USER_START pid=5116 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.230000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.381759 sshd[5120]: Connection closed by 10.0.0.1 port 37902 Dec 13 22:59:18.382053 sshd-session[5116]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:18.381000 audit[5116]: USER_END pid=5116 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.381000 audit[5116]: CRED_DISP pid=5116 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.397222 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:37902.service: Deactivated successfully. 
Dec 13 22:59:18.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.10:22-10.0.0.1:37902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:18.400007 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 22:59:18.400971 systemd-logind[1585]: Session 15 logged out. Waiting for processes to exit. Dec 13 22:59:18.405011 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:37916.service - OpenSSH per-connection server daemon (10.0.0.1:37916). Dec 13 22:59:18.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.10:22-10.0.0.1:37916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:18.405909 systemd-logind[1585]: Removed session 15. Dec 13 22:59:18.473000 audit[5132]: USER_ACCT pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.476595 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 37916 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:18.475000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.475000 audit[5132]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe3bd9e20 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:18.475000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:18.477818 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:18.481968 systemd-logind[1585]: New session 16 of user core. Dec 13 22:59:18.489748 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 22:59:18.490000 audit[5132]: USER_START pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:18.493000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.018000 audit[5149]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5149 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:19.018000 audit[5149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff53d9fc0 a2=0 a3=1 items=0 ppid=2863 pid=5149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:19.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:19.028000 audit[5149]: NETFILTER_CFG table=nat:148 family=2 entries=20 op=nft_register_rule pid=5149 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:19.028000 audit[5149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff53d9fc0 a2=0 a3=1 items=0 ppid=2863 pid=5149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:19.028000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:19.031052 sshd[5136]: Connection closed by 10.0.0.1 port 37916 Dec 13 22:59:19.031320 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:19.031000 audit[5132]: USER_END pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.031000 audit[5132]: CRED_DISP pid=5132 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.042880 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:37916.service: Deactivated successfully. Dec 13 22:59:19.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.10:22-10.0.0.1:37916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:19.050009 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 22:59:19.052214 systemd-logind[1585]: Session 16 logged out. Waiting for processes to exit. Dec 13 22:59:19.058309 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:37920.service - OpenSSH per-connection server daemon (10.0.0.1:37920). 
Dec 13 22:59:19.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.10:22-10.0.0.1:37920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:19.062510 systemd-logind[1585]: Removed session 16. Dec 13 22:59:19.060000 audit[5156]: NETFILTER_CFG table=filter:149 family=2 entries=38 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:19.060000 audit[5156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdb029150 a2=0 a3=1 items=0 ppid=2863 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:19.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:19.067000 audit[5156]: NETFILTER_CFG table=nat:150 family=2 entries=20 op=nft_register_rule pid=5156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:19.067000 audit[5156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdb029150 a2=0 a3=1 items=0 ppid=2863 pid=5156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:19.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:19.118000 audit[5155]: USER_ACCT pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.120694 sshd[5155]: Accepted publickey for core from 10.0.0.1 port 37920 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:19.120000 audit[5155]: CRED_ACQ pid=5155 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.120000 audit[5155]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8968bb0 a2=3 a3=0 items=0 ppid=1 pid=5155 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:19.120000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:19.122858 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:19.127329 systemd-logind[1585]: New session 17 of user core. Dec 13 22:59:19.139763 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 22:59:19.140000 audit[5155]: USER_START pid=5155 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.141000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.546804 sshd[5160]: Connection closed by 10.0.0.1 port 37920 Dec 13 22:59:19.547589 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:19.549000 audit[5155]: USER_END pid=5155 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.549000 audit[5155]: CRED_DISP pid=5155 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.560472 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:37920.service: Deactivated successfully. Dec 13 22:59:19.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.10:22-10.0.0.1:37920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:19.562589 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 22:59:19.563455 systemd-logind[1585]: Session 17 logged out. Waiting for processes to exit. Dec 13 22:59:19.567012 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:37932.service - OpenSSH per-connection server daemon (10.0.0.1:37932). Dec 13 22:59:19.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.10:22-10.0.0.1:37932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:19.568608 systemd-logind[1585]: Removed session 17. 
Dec 13 22:59:19.625000 audit[5171]: USER_ACCT pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.627445 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 37932 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:19.626000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.627000 audit[5171]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbb55290 a2=3 a3=0 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:19.627000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:19.629438 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:19.635500 systemd-logind[1585]: New session 18 of user core. Dec 13 22:59:19.643759 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 22:59:19.645000 audit[5171]: USER_START pid=5171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.647000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.738404 sshd[5175]: Connection closed by 10.0.0.1 port 37932 Dec 13 22:59:19.738933 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:19.739000 audit[5171]: USER_END pid=5171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.739000 audit[5171]: CRED_DISP pid=5171 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:19.743977 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:37932.service: Deactivated successfully. Dec 13 22:59:19.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.10:22-10.0.0.1:37932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:19.746397 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 22:59:19.748199 systemd-logind[1585]: Session 18 logged out. Waiting for processes to exit. Dec 13 22:59:19.750608 systemd-logind[1585]: Removed session 18. 
Dec 13 22:59:21.025280 containerd[1608]: time="2025-12-13T22:59:21.025109931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 22:59:21.249670 containerd[1608]: time="2025-12-13T22:59:21.249614561Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:21.251150 containerd[1608]: time="2025-12-13T22:59:21.251068800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 22:59:21.251236 containerd[1608]: time="2025-12-13T22:59:21.251136560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:21.251335 kubelet[2752]: E1213 22:59:21.251293 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:21.252101 kubelet[2752]: E1213 22:59:21.251344 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:21.252101 kubelet[2752]: E1213 22:59:21.251472 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6hm9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d74677ddd-xdvvs_calico-apiserver(ef0b8707-0b6c-4822-b958-0aa6bda67c50): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:21.254845 kubelet[2752]: E1213 22:59:21.252654 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" podUID="ef0b8707-0b6c-4822-b958-0aa6bda67c50" Dec 13 22:59:22.023393 containerd[1608]: time="2025-12-13T22:59:22.023356432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 22:59:22.289184 containerd[1608]: time="2025-12-13T22:59:22.289050567Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:22.290015 containerd[1608]: time="2025-12-13T22:59:22.289981487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 22:59:22.290100 containerd[1608]: time="2025-12-13T22:59:22.290054647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:22.290476 kubelet[2752]: E1213 22:59:22.290218 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:22.290476 kubelet[2752]: E1213 22:59:22.290271 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:22.290476 kubelet[2752]: E1213 22:59:22.290402 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wlgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6ddf58565-58qbn_calico-apiserver(bc906328-edba-4f47-8190-6385bd5de6a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:22.291684 kubelet[2752]: E1213 22:59:22.291629 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" podUID="bc906328-edba-4f47-8190-6385bd5de6a4" Dec 13 22:59:23.023444 containerd[1608]: time="2025-12-13T22:59:23.023410248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 22:59:23.239154 containerd[1608]: time="2025-12-13T22:59:23.239092898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:23.240133 containerd[1608]: time="2025-12-13T22:59:23.240092097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 22:59:23.240236 containerd[1608]: time="2025-12-13T22:59:23.240174577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:23.240302 kubelet[2752]: E1213 22:59:23.240263 2752 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 22:59:23.240344 kubelet[2752]: E1213 22:59:23.240311 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 22:59:23.240475 kubelet[2752]: E1213 22:59:23.240422 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxbwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dxl2_calico-system(e6e20487-ee64-4317-b075-5244d40e7b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:23.242505 containerd[1608]: time="2025-12-13T22:59:23.242475536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 22:59:23.442281 containerd[1608]: time="2025-12-13T22:59:23.442150554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:23.443456 containerd[1608]: time="2025-12-13T22:59:23.443361113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 22:59:23.443456 containerd[1608]: time="2025-12-13T22:59:23.443404953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:23.443685 kubelet[2752]: E1213 22:59:23.443605 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 22:59:23.444359 kubelet[2752]: E1213 22:59:23.443688 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 22:59:23.444359 kubelet[2752]: E1213 22:59:23.443805 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxbwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8dxl2_calico-system(e6e20487-ee64-4317-b075-5244d40e7b5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:23.445043 kubelet[2752]: E1213 22:59:23.444997 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:59:24.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.10:22-10.0.0.1:36482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:24.753456 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:36482.service - OpenSSH per-connection server daemon (10.0.0.1:36482). Dec 13 22:59:24.756448 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 22:59:24.756601 kernel: audit: type=1130 audit(1765666764.751:847): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.10:22-10.0.0.1:36482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:24.824000 audit[5198]: USER_ACCT pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.825884 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 36482 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:24.827000 audit[5198]: CRED_ACQ pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.830340 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:24.832285 kernel: audit: type=1101 audit(1765666764.824:848): pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.832344 kernel: audit: type=1103 audit(1765666764.827:849): pid=5198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.834307 kernel: audit: type=1006 audit(1765666764.828:850): pid=5198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 13 22:59:24.828000 audit[5198]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde33a8f0 a2=3 a3=0 items=0 ppid=1 pid=5198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:24.836194 systemd-logind[1585]: New session 19 of user core. 
Dec 13 22:59:24.837388 kernel: audit: type=1300 audit(1765666764.828:850): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde33a8f0 a2=3 a3=0 items=0 ppid=1 pid=5198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:24.837440 kernel: audit: type=1327 audit(1765666764.828:850): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:24.828000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:24.847069 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 22:59:24.847000 audit[5198]: USER_START pid=5198 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.849000 audit[5202]: CRED_ACQ pid=5202 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.856945 kernel: audit: type=1105 audit(1765666764.847:851): pid=5198 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.857012 kernel: audit: type=1103 audit(1765666764.849:852): pid=5202 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.934392 sshd[5202]: Connection closed by 10.0.0.1 port 36482 Dec 13 22:59:24.935746 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:24.935000 audit[5198]: USER_END pid=5198 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.940250 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:36482.service: Deactivated successfully. 
Dec 13 22:59:24.935000 audit[5198]: CRED_DISP pid=5198 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.943482 kernel: audit: type=1106 audit(1765666764.935:853): pid=5198 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.943569 kernel: audit: type=1104 audit(1765666764.935:854): pid=5198 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:24.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.10:22-10.0.0.1:36482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:24.944902 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 22:59:24.945940 systemd-logind[1585]: Session 19 logged out. Waiting for processes to exit. Dec 13 22:59:24.947571 systemd-logind[1585]: Removed session 19. Dec 13 22:59:25.024782 containerd[1608]: time="2025-12-13T22:59:25.024723019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 22:59:25.239920 containerd[1608]: time="2025-12-13T22:59:25.239868762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:25.240880 containerd[1608]: time="2025-12-13T22:59:25.240845842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 22:59:25.240940 containerd[1608]: time="2025-12-13T22:59:25.240883842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:25.241086 kubelet[2752]: E1213 22:59:25.241033 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:25.241086 kubelet[2752]: E1213 22:59:25.241081 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 22:59:25.241395 kubelet[2752]: E1213 22:59:25.241207 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbqbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d74677ddd-w5hbt_calico-apiserver(35b16e82-6b93-4598-9c60-bbc71d0b419a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:25.242474 kubelet[2752]: E1213 22:59:25.242395 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" podUID="35b16e82-6b93-4598-9c60-bbc71d0b419a" Dec 13 22:59:25.493000 audit[5215]: NETFILTER_CFG table=filter:151 family=2 entries=26 op=nft_register_rule pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:25.493000 audit[5215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffef367da0 a2=0 a3=1 items=0 ppid=2863 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:25.493000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:25.504000 audit[5215]: NETFILTER_CFG table=nat:152 family=2 entries=104 op=nft_register_chain pid=5215 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 22:59:25.504000 
audit[5215]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffef367da0 a2=0 a3=1 items=0 ppid=2863 pid=5215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:25.504000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 22:59:26.023842 containerd[1608]: time="2025-12-13T22:59:26.023721091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 22:59:26.207244 containerd[1608]: time="2025-12-13T22:59:26.207196734Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:26.208201 containerd[1608]: time="2025-12-13T22:59:26.208122133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 22:59:26.208322 containerd[1608]: time="2025-12-13T22:59:26.208195333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:26.208388 kubelet[2752]: E1213 22:59:26.208357 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 22:59:26.208442 kubelet[2752]: E1213 22:59:26.208403 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 22:59:26.208573 kubelet[2752]: E1213 22:59:26.208515 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7v4vq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dc99d86df-qrdhj_calico-system(de321aa1-ac48-4c1f-9093-69f544b891b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:26.209699 kubelet[2752]: E1213 22:59:26.209652 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" podUID="de321aa1-ac48-4c1f-9093-69f544b891b9" Dec 13 22:59:27.023946 containerd[1608]: time="2025-12-13T22:59:27.023909270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 22:59:27.223261 containerd[1608]: 
time="2025-12-13T22:59:27.222178032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:27.223261 containerd[1608]: time="2025-12-13T22:59:27.223200992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 22:59:27.223677 containerd[1608]: time="2025-12-13T22:59:27.223277632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:27.223709 kubelet[2752]: E1213 22:59:27.223493 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 22:59:27.223709 kubelet[2752]: E1213 22:59:27.223538 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 22:59:27.223887 kubelet[2752]: E1213 22:59:27.223712 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdcqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmghn_calico-system(9fe07c94-e384-4193-9d4b-ef9d906eb265): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:27.225132 kubelet[2752]: E1213 22:59:27.225022 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmghn" podUID="9fe07c94-e384-4193-9d4b-ef9d906eb265" Dec 13 22:59:29.027752 kubelet[2752]: E1213 22:59:29.027692 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bc476bc64-64mwd" podUID="d1f05f6c-e0c0-404f-9df9-6993eb6f715c" Dec 13 22:59:29.946050 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:36486.service - OpenSSH per-connection server daemon (10.0.0.1:36486). Dec 13 22:59:29.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.10:22-10.0.0.1:36486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:29.946811 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 22:59:29.946849 kernel: audit: type=1130 audit(1765666769.944:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.10:22-10.0.0.1:36486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:59:30.004000 audit[5217]: USER_ACCT pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.005895 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 36486 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:30.007000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.009825 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:30.011500 kernel: audit: type=1101 audit(1765666770.004:859): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.011707 kernel: audit: type=1103 audit(1765666770.007:860): pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.011732 kernel: audit: type=1006 audit(1765666770.007:861): pid=5217 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 22:59:30.013205 kernel: audit: type=1300 audit(1765666770.007:861): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3e55790 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:30.007000 audit[5217]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3e55790 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:30.014258 systemd-logind[1585]: New session 20 of user core. Dec 13 22:59:30.007000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:30.017192 kernel: audit: type=1327 audit(1765666770.007:861): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:30.022734 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 22:59:30.023000 audit[5217]: USER_START pid=5217 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.024000 audit[5223]: CRED_ACQ pid=5223 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.031362 kernel: audit: type=1105 audit(1765666770.023:862): pid=5217 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.031420 kernel: audit: type=1103 audit(1765666770.024:863): pid=5223 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.110709 sshd[5223]: Connection closed by 10.0.0.1 port 36486 Dec 13 22:59:30.111039 sshd-session[5217]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:30.110000 audit[5217]: USER_END pid=5217 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.115440 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:36486.service: Deactivated successfully. Dec 13 22:59:30.110000 audit[5217]: CRED_DISP pid=5217 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.117943 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 22:59:30.118824 kernel: audit: type=1106 audit(1765666770.110:864): pid=5217 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.118866 kernel: audit: type=1104 audit(1765666770.110:865): pid=5217 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:30.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.10:22-10.0.0.1:36486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:30.119630 systemd-logind[1585]: Session 20 logged out. Waiting for processes to exit. Dec 13 22:59:30.120824 systemd-logind[1585]: Removed session 20. 
Dec 13 22:59:33.026012 kubelet[2752]: E1213 22:59:33.025963 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6ddf58565-58qbn" podUID="bc906328-edba-4f47-8190-6385bd5de6a4" Dec 13 22:59:33.226388 kubelet[2752]: E1213 22:59:33.226347 2752 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 22:59:34.023273 kubelet[2752]: E1213 22:59:34.023227 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-xdvvs" podUID="ef0b8707-0b6c-4822-b958-0aa6bda67c50" Dec 13 22:59:35.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.10:22-10.0.0.1:57676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:35.127205 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:57676.service - OpenSSH per-connection server daemon (10.0.0.1:57676). Dec 13 22:59:35.127931 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 22:59:35.127978 kernel: audit: type=1130 audit(1765666775.125:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.10:22-10.0.0.1:57676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:59:35.178000 audit[5264]: USER_ACCT pid=5264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.180125 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 57676 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:35.182975 sshd-session[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:35.180000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.185604 kernel: audit: type=1101 audit(1765666775.178:868): pid=5264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.185979 kernel: audit: type=1103 audit(1765666775.180:869): pid=5264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.188722 systemd-logind[1585]: New session 21 of user core. Dec 13 22:59:35.196491 kernel: audit: type=1006 audit(1765666775.180:870): pid=5264 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 22:59:35.196572 kernel: audit: type=1300 audit(1765666775.180:870): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc16c8960 a2=3 a3=0 items=0 ppid=1 pid=5264 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:35.180000 audit[5264]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc16c8960 a2=3 a3=0 items=0 ppid=1 pid=5264 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:35.180000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:35.202818 kernel: audit: type=1327 audit(1765666775.180:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:35.207777 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 13 22:59:35.208000 audit[5264]: USER_START pid=5264 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.210000 audit[5268]: CRED_ACQ pid=5268 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.220725 kernel: audit: type=1105 audit(1765666775.208:871): pid=5264 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.220799 kernel: audit: type=1103 audit(1765666775.210:872): pid=5268 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.289270 sshd[5268]: Connection closed by 10.0.0.1 port 57676 Dec 13 22:59:35.289170 sshd-session[5264]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:35.290000 audit[5264]: USER_END pid=5264 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.294643 systemd-logind[1585]: Session 21 logged out. Waiting for processes to exit. Dec 13 22:59:35.294890 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:57676.service: Deactivated successfully. Dec 13 22:59:35.296501 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 22:59:35.290000 audit[5264]: CRED_DISP pid=5264 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.298186 systemd-logind[1585]: Removed session 21. Dec 13 22:59:35.299746 kernel: audit: type=1106 audit(1765666775.290:873): pid=5264 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.299833 kernel: audit: type=1104 audit(1765666775.290:874): pid=5264 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:35.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.10:22-10.0.0.1:57676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:59:36.023753 kubelet[2752]: E1213 22:59:36.023689 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d74677ddd-w5hbt" podUID="35b16e82-6b93-4598-9c60-bbc71d0b419a" Dec 13 22:59:37.025098 kubelet[2752]: E1213 22:59:37.025013 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8dxl2" podUID="e6e20487-ee64-4317-b075-5244d40e7b5a" Dec 13 22:59:38.023262 kubelet[2752]: E1213 22:59:38.023166 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dc99d86df-qrdhj" podUID="de321aa1-ac48-4c1f-9093-69f544b891b9" Dec 13 22:59:40.023707 kubelet[2752]: E1213 22:59:40.023648 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmghn" podUID="9fe07c94-e384-4193-9d4b-ef9d906eb265" Dec 13 22:59:40.301914 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:57682.service - OpenSSH per-connection server daemon (10.0.0.1:57682). Dec 13 22:59:40.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.10:22-10.0.0.1:57682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:40.306044 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 22:59:40.306093 kernel: audit: type=1130 audit(1765666780.300:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.10:22-10.0.0.1:57682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 22:59:40.374000 audit[5286]: USER_ACCT pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.376211 sshd[5286]: Accepted publickey for core from 10.0.0.1 port 57682 ssh2: RSA SHA256:wrASvn4TPBLeGSBdJR0bjeHJhgtBBrNwNgMNeW/n+/Q Dec 13 22:59:40.377000 audit[5286]: CRED_ACQ pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.379420 sshd-session[5286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 22:59:40.381497 kernel: audit: type=1101 audit(1765666780.374:877): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.381550 kernel: audit: type=1103 audit(1765666780.377:878): pid=5286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.383258 kernel: audit: type=1006 audit(1765666780.377:879): pid=5286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 22:59:40.377000 audit[5286]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe58f10c0 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:40.386592 kernel: audit: type=1300 audit(1765666780.377:879): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe58f10c0 a2=3 a3=0 items=0 ppid=1 pid=5286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 22:59:40.377000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:40.387847 kernel: audit: type=1327 audit(1765666780.377:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 22:59:40.389233 systemd-logind[1585]: New session 22 of user core. Dec 13 22:59:40.399736 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 22:59:40.401000 audit[5286]: USER_START pid=5286 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.402000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.411171 kernel: audit: type=1105 audit(1765666780.401:880): pid=5286 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.411251 kernel: audit: type=1103 audit(1765666780.402:881): pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.507402 sshd[5290]: Connection closed by 10.0.0.1 port 57682 Dec 13 22:59:40.508953 sshd-session[5286]: pam_unix(sshd:session): session closed for user core Dec 13 22:59:40.508000 audit[5286]: USER_END pid=5286 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.514586 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:57682.service: Deactivated successfully. Dec 13 22:59:40.508000 audit[5286]: CRED_DISP pid=5286 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.516312 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 22:59:40.517601 kernel: audit: type=1106 audit(1765666780.508:882): pid=5286 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.517720 kernel: audit: type=1104 audit(1765666780.508:883): pid=5286 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 22:59:40.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.10:22-10.0.0.1:57682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 22:59:40.518172 systemd-logind[1585]: Session 22 logged out. Waiting for processes to exit. Dec 13 22:59:40.520336 systemd-logind[1585]: Removed session 22. 
Dec 13 22:59:41.025772 containerd[1608]: time="2025-12-13T22:59:41.025721367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 22:59:41.236237 containerd[1608]: time="2025-12-13T22:59:41.236175036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:41.237247 containerd[1608]: time="2025-12-13T22:59:41.237205362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 22:59:41.237346 containerd[1608]: time="2025-12-13T22:59:41.237283363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:41.237489 kubelet[2752]: E1213 22:59:41.237438 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 22:59:41.237779 kubelet[2752]: E1213 22:59:41.237490 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 22:59:41.237779 kubelet[2752]: E1213 22:59:41.237614 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:029548d4e33045ac9fd2ec7d416edc31,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqvhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bc476bc64-64mwd_calico-system(d1f05f6c-e0c0-404f-9df9-6993eb6f715c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:41.239666 containerd[1608]: time="2025-12-13T22:59:41.239638657Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 22:59:41.457106 containerd[1608]: time="2025-12-13T22:59:41.456983689Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 22:59:41.458685 containerd[1608]: time="2025-12-13T22:59:41.458648019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 22:59:41.458811 containerd[1608]: time="2025-12-13T22:59:41.458725859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 22:59:41.458908 kubelet[2752]: E1213 22:59:41.458849 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 22:59:41.458965 kubelet[2752]: E1213 22:59:41.458918 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 22:59:41.459125 kubelet[2752]: E1213 22:59:41.459039 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqvhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bc476bc64-64mwd_calico-system(d1f05f6c-e0c0-404f-9df9-6993eb6f715c): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 22:59:41.460239 kubelet[2752]: E1213 22:59:41.460208 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bc476bc64-64mwd" podUID="d1f05f6c-e0c0-404f-9df9-6993eb6f715c"